Chapter 13 – Sign Language: A Window to Language Evolution

1. Introduction

Our aim is to present some insights on language evolution gained through the study of sign language. We address the general evolution of language first. Then we compare spoken and sign language and conclude that they do not differ apart from modality. Finally, we ask what sign language can tell us about language evolution, looking primarily at emerging sign languages.

2. Evolution of language

2.1 Evolution of spoken language

2.1.1 From gestures to co-speech gestures

The origin of human language has always been a debated topic. One hypothesis is that spoken language originally derived from gestures (Kendon, 2016; Armstrong, Stokoe & Wilcox, 1995; Corballis, 2002). Gestures, a universal feature of human communication, function as a visual language when verbal expression is temporarily disrupted, and they are often used in dyadic interactions. In fact, language areas in the brain, such as Broca’s area, are particularly active when observing gestural communication (Paulesu, Frith & Frackowiak, 1993; Zatorre et al., 1992). This is in line with Condillac’s (1746) suggestion that language developed from gesture to speech.

However, it is worth noting that spoken language did not evolve directly from gestures. Rather, co-speech gestures (where the ‘speech’ consists of vocalizations rather than fully formed words) set in before spoken language. This is evident in early language acquisition: word comprehension and production occur after nine months of age, whereas intentional control of the hands and babbling occur before eight months of age (Rochat, 1989; Iverson & Thelen, 1999; Vauclair, 2004). Bernardis and Gentilucci (2006) also highlighted that sound production was incorporated into the gestural communicative system of male adults when they interacted with objects or when the meaning of abstract gestures had to be conveyed. Sounds can make gestures more salient when communicating information to other interlocutors. Thus, the use of gestures with sound confers an advantage in the creation of a richer vocabulary.

Furthermore, speech and gestures share common neural substrates. Not only is there a tendency to produce gestures at the same time as the spoken word; the information conveyed verbally is also reduced when gestures supplement it, and vice versa. This suggests that vocalizing words and displaying symbolic gestures with the same meaning are controlled by a single communication system.

2.1.2 From co-speech gestures to spoken language

While co-speech gestures may seem to provide an enhanced communication system, they have limitations. Steklis and Harnad (1976) listed them as follows:

  • Gestures are of no use in the dark or across partitions
  • They cannot refer to an absent referent or to the past or future
  • They keep the eyes and hands occupied
  • They are slow and inefficient when
    > several people are communicating
    > crucial information is needed immediately

These limitations could have led to an increasing reliance on vocalization. Furthermore, it was emphasized that gestures were already somewhat arbitrary by this time, making speech the obvious solution to the limitations listed above.

Spoken language evolution is also closely linked to biological evolution. Fitch (2000) highlighted that control over vocalization was largely due to modifications of the vocal tract at two positions. The slow descent of the larynx to its adult position, seen in babies from three months of age, is said to have an impact on vocalization (Sasaki et al., 1977): it allowed more room for the human tongue to move vertically and horizontally within the vocal tract (Lieberman et al., 1969). The tongue and lips, which constrict the airway, also played a role in vocalization (Boë et al., 2017). Furthermore, increased brain complexity allowed the creation of more complex meanings through combinations of sounds (Jackendoff, 2006), which developed into more complex linguistic elements such as lexicons and sentences.

2.2 Sign Language

Spoken language is natural and ubiquitous. But what about people who are unable to communicate verbally or who have hearing impairments? Some create their own language: a sign language.

According to StartASL.com (2017), Aristotle was the first person whose claims about the deaf were recorded. He theorised that people can only learn through hearing spoken language; deaf people were therefore considered disadvantaged, unable to learn or be educated. This eventually led to the first documented sign language, said to be French Sign Language (FSL), in the 15th century AD. Different sign languages, such as American Sign Language (ASL), then emerged based on FSL because of the need for a language with which to communicate with and educate deaf people. However, some sign languages, such as Nicaraguan Sign Language (NSL), emerged organically without being modeled on earlier sign languages.

3. Comparison between sign language and spoken language

In this section, sign language and spoken language will be compared to determine whether it is fair to treat sign language as a reflection of the general human capacity for language.

3.1 Similarities

Sign and spoken language are processed in the same regions of the brain. This was evident in a study conducted by Marshall et al. (2005) on Maureen, a bilingual user of British Sign Language and English who also has Wernicke’s aphasia. Wernicke’s aphasia is characterized by impaired language comprehension. When Maureen’s comprehension of written and finger-spelt words was tested, both were found to be equally impaired.

The development of sign language, like that of spoken language, starts out iconic and slowly becomes arbitrary. Deaf babies use their hands to babble (Petitto & Marentette, 1991), aided by gestures like pointing, just like hearing children. They later learn the arbitrary signs for concepts, i.e. actual signs.

Both sign and spoken language are characterized by Hockett’s design features (Hockett, 1960). Some of them are:

1) Duality of patterning: The ability of human languages, both signed and spoken, to combine a limited set of meaningless units into discrete meaningful units

2) Displacement: The ability to talk about things not in the here and now

3) Arbitrariness: No inherent link between symbol and referent

4) Semanticity: Having words that refer to concepts

5) Reflexiveness: The ability to use language to talk about itself

3.2 Differences

Söderfeldt et al. (1997) used positron emission tomography to determine whether there is a difference between the two language inputs. In spoken language the auditory cortex is more activated, whereas in sign language the visual cortex is more activated. No other significant differences in the brain were found. Thus, the only difference between sign language and spoken language is their modality. Hence we conclude that it is fair to extrapolate insights on the evolution of sign language to spoken language.

4. What sign languages tell us about language evolution

Some insights on language evolution can be gleaned from studying sign language, especially emerging sign languages.

4.1 Sign language in general


Studying the fingerspelling development of deaf children can potentially explain how Hockett’s ‘duality of patterning’ feature arose. Duality of patterning is the ability to create meaningful units (utterances) from non-meaningful units (phones, or individual sounds like /p/ in ‘pat’). Research shows that deaf children initially treat finger-spelt words as lexical items rather than as a series of letters representing English orthography. They begin to link handshapes to English graphemes only at around age 3 (Humphries & MacDougall, 2000). However, this link is not based on a phonological understanding of the alphabetical system; rather, they view handshapes as visual representations of a concept.

This is illustrated by a fingerspelling experiment done with deaf children. In Akamatsu’s (1985) experiment, some children produced correct handshapes (the shapes of the fingerspelled letters) but wrong spellings, while others produced correct spellings but wrong handshapes. For the former, the order of spelling is not important as long as all or most of the elements in a word are present (refer to Fig. 1 and 2). For the latter, the children blended together letters from the manual alphabet instead of spelling them individually, thus affecting the shape of the fingerspelled letters. Akamatsu therefore proposed that the children were analyzing fingerspelling as a complex sign rather than as a combination of letters. Only at about 6 years of age, when they started going to school, were they able to understand the rules that govern spelling. From this, we can infer that humans do not naturally produce letters or phones as a precursor to language use. In fact, the converse holds: we develop language first and only later realise that it can be broken down into smaller units. This could also hold true for language evolution. Long vocalisations could have come first; then people realised that they could be broken up and recombined to form other meanings.

Fig. 1: Fingerspelling of the word ‘Ice’

Fig. 2: Wrong order of fingerspelling for the word ‘Ice’

4.2 Emerging Sign Languages

Research into emerging sign languages may give us a glimpse of how early human language could have evolved as we can closely observe the inception and development of a new language. It tells us why and how language could have evolved, given a fully developed human brain and physiology.

With this in mind, what then do emerging sign languages tell us about language evolution? First, there must be a community of people who communicate in that language, and this community must consist of more than just a few people. A family unit is thus insufficient: home sign can develop in a family with at least one deaf signer and other hearing members, but it will not evolve into a language. A larger group of users is needed. One reason for this is that at least two generations of signers are needed for a rudimentary sign system to evolve into a language (Senghas, Senghas & Pyers, 2005). The first generation provides a shared symbolic environment, or vocabulary. The second generation then uses the signs created by the first generation in a more systematic manner and develops a grammatical system. Hence, we can postulate that a proto-language needed to have been used by a community of people for it to eventually develop into a full-blown language such as English. However, there is no known fixed number of speakers that ensures these rudimentary systems eventually evolve into languages.

Second, a shared vocabulary develops before grammar. This is seen when observing first-generation users of emerging sign languages, who have a strong tendency to use only one nominal in each sentence. For example, to convey ‘A girl feeds a woman’, first-generation signers would instead sign ‘WOMAN SIT’, ‘GIRL FEED’. This can be attributed to the first generation’s lack of grammar for differentiating between Subject (‘a girl’) and Object (‘a woman’) (Meir, Sandler, Padden & Aronoff, 2010). Thus, language could be said to have developed words first, with grammar developing slowly afterwards.

Third, words could have started out iconic and gradually become more arbitrary. This is seen in emerging sign languages, where younger signers (second generation onwards) simplify signs created by the first generation (Erard, 2005). For example, as shown in Fig. 3, the sign ‘MAN’ among older Al-Sayyid Bedouin Sign Language (ABSL) signers is the gesture of a mustache twirl, presumably because men have mustaches, but for younger signers it is merely a twist of the index finger at the upper lip. Thus, signs that were initially iconic gestures start to lose the characteristics that represent the physical objects they embody and become more arbitrary through the minimization of movement. This could also have happened in spoken languages, where words may originally have been more onomatopoeic but were simplified by later generations until they bear little or no resemblance to their actual referents.

Fig. 3: Simplification of the sign for ‘MAN’ in ABSL

Moreover, the way and rate at which a language evolves may be influenced by the community. For example, NSL and ABSL are both emerging sign languages, yet they have markedly different grammars. NSL, like more developed sign languages, has a rich inflection system that reduces its reliance on a strict word order (Erard, 2005). An example of inflection in NSL is how signing two actions slightly to the side instead of forward indicates that the same person is performing both actions (Senghas, Senghas & Pyers, 2005). Conversely, ABSL does not have rich inflections and instead relies on strict word order to avoid ambiguity (Fox, 2007). Some attribute this to the nature of the two communities. NSL is considered a deaf community sign language, whereas ABSL is a village sign language: ABSL developed among people of the same village, while NSL developed among people who lived in different areas but were brought together for some reason, usually education. Thus, NSL has a bigger, more diverse community than ABSL, with more members joining each year. This may be the cause of NSL’s more accelerated grammatical development.

Interestingly, while all emerging sign languages are developing syntax, there is no single path to its development. ABSL relies heavily on word order, while Israeli Sign Language (ISL) is developing verb agreement and relies less on word order. Despite both being emerging sign languages, they are not developing the same areas of syntax or grammar; rather, they are going in opposite directions with regard to word order and morphology. This could imply that different languages evolved differently from the very start.

5. Conclusion

The debate over the origins of human language is highly contentious. One theory hypothesizes that language evolved from gestures. Over time, speech may have supplanted gestures because of its convenience.

An interesting opportunity to explore the links between gesture and speech comes from the study of sign languages. While many are derived from existing, developed sign languages, some have recently emerged spontaneously without a template.

A sign language is undeniably a human language. It uses the same language areas of the brain and is learnt by children in ways analogous to spoken language. Most importantly, it displays all of Hockett’s design features that characterize human languages and differentiate them from animal communication.

By studying sign languages, some insights on language evolution can be gained, such as how ‘duality of patterning’ emerged. In addition, a community of users is needed, and its characteristics can affect how the language develops. We also learnt that vocabulary most likely came before grammar and that words gradually became less iconic and more arbitrary over time. However, we have to note that the specific features of any given sign language cannot be completely predicted, as there is no single way for a language to develop.

References

Akamatsu, C. (1985). Fingerspelling formulae: A word is more or less than the sum of its letters. In W. Stokoe & V. Volterra (Eds.), SLR’83: Sign language research (pp. 126-132). Silver Spring, MD: Linstok Press.

Armstrong, D. F., Stokoe, W. C. & Wilcox, S. E. (1995). Gesture and the nature of language. Cambridge, UK: Cambridge University Press.

Bernardis, P. & Gentilucci, M. (2006). Speech and gesture share the same communication system. Neuropsychologia, 44, 178-190.

Boë, L. J., Berthommier, F., Legou, T., Captier, G., Kemp, C., Sawallis, T. R., Becker, Y., Rey, A. & Fagot, J. (2017). Evidence of a vocalic proto-system in the baboon (Papio papio) suggests pre-hominin speech precursors. PLoS ONE, 12(1), e0169321.

Condillac, E. B. (1746). Essai sur l’origine des connaissances humaines. Paris

Corballis, M.C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press

Erard, M. (2005). A language is born. New Scientist, 188(2522), 46-49.

Fitch, W. T. (2000). The evolution of speech: a comparative review. Trends in cognitive sciences, 4(7), 258-267.

Fox, M. (2007). Village of the deaf. Discover, 28(7), 66.

Hockett, C. F. (1960). The origins of speech. Scientific American, 203, 89–96.

Humphries, T., & MacDougall, F. (2000). “Chaining” and other links: Making connections between American Sign Language and English in two types of school settings. Visual Anthropology Review, 15(2), 84-94.

Iverson, J. M., & Thelen, E. (1999). Hand, mouth, and brain: The dynamic emergence of speech and gesture. Journal of Consciousness Studies, 6, 19–40.

Jackendoff, R. (2006). How did language begin? Linguistic Society of America.

Kendon, A. (2016). Reflections on the “gesture-first” hypothesis of language origins. Psychonomic Bulletin & Review, 1-8.

Lieberman, P. et al. (1969) Vocal tract limitations on the vowel repertoires of rhesus monkeys and other nonhuman primates. Science, 164 (3884), 1185–1187

Marshall, J., Atkinson, J., Woll, B., & Thacker, A. (2005). Aphasia in a bilingual user of British sign language and English: Effects of cross-linguistic cues. Cognitive Neuropsychology, 22, 719–736.

Maxwell, M. (1988). The alphabetic principle and fingerspelling. Sign Language Studies, 59, 377–404.

Meir, I., Sandler, W., Padden, C., & Aronoff, M. (2010). Emerging Sign Languages. The Oxford Handbook of Deaf Studies, Language, and Education,2. doi:10.1093/oxfordhb/9780195390032.013.0018

Paulesu, E., Frith, C. D., & Frackowiak, R. S. (1993). The neural correlates of the verbal component of working memory. Nature, 362(6418), 342–345.

Petitto, L. A. & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251, 1493-1496.

Rochat, P. (1989). Object manipulations and exploration in 2- to 5-month-old infants. Developmental Psychology, 25, 871–884.

Sasaki, C.T. et al. (1977) Postnatal descent of the epiglottis in man. Arch. Otolaryngol, 103, 169–171

Senghas, R. J., Senghas, A. & Pyers, J. E. (2005). The emergence of Nicaraguan Sign Language: Questions of development, acquisition and evolution. In J. Langer, S. T. Parker, & C. Milbrath (Eds.), Biology and knowledge revisited: From neurogenesis to psychogenesis (pp. 287-306). Mahwah, NJ: Lawrence Erlbaum Associates.

Söderfeldt, B., Ingvar, M., Rönnberg, J., Eriksson, L., Serrander, M. & Stone-Elander, S. (1997). Signed and spoken language perception studied by positron emission tomography. Neurology, 49, 82-87.

StartASL (2017). History of sign language – Deaf history. Retrieved from https://www.startasl.com/history-of-sign-language_html

Steklis, H.D. & Harnad, S. (1976) From hand to mouth: Some critical stages in the evolution of language. Annals of the New York Academy of Science, 280, 445-455

Vauclair, J. (2004). Lateralization of communicative signals in nonhuman primates and the hypothesis of the gestural origin of language. Interaction Studies, 5(3), 365-386.

Zatorre, R. J., Evans, A. C., Meyer, E., & Gjedde, A. (1992). Lateralization of phonetic and pitch discrimination in speech processing. Science, 256(5058), 846–849.
