Are Bilingual Children Better at Cross-Modal Lexical Mapping?

Multisensory Nature of Speech: As humans, we like to maintain eye contact with the person we are talking to in a conversation, and infants do this too! In fact, we are born with the tendency to pick up on the multisensory aspects of speech. Previous studies have shown that infants selectively direct their attention towards talking faces and pay close attention to the orofacial region, that is, the mouth and face, of the people they are socially attending to. These orofacial cues help infants orient themselves to the speech they are hearing and learn how their native language works.

Bilingualism, according to some research, gives infants an edge in attending to the visual aspects of speech. In a study by Birulés et al. (2018), 8- to 18-month-old bilingual infants expressed a greater interest in orofacial cues than their monolingual peers. Complementary research also suggests that bilingual infants are more adept at telling languages apart visually. So, children exposed to bilingual environments are more responsive to the visible aspects of speech than monolingual children, but does this difference impact how they learn new words? And what about other sensory modalities, such as the auditory aspects of speech?

The Study: In a study conducted by Havy and Zesiger (2021), 30-month-old bilingual children were tested on their ability to identify novel words taught to them either auditorily or visually. This is called “lexical mapping”, in which a new word (e.g., ‘var’) is presented with either an auditory representation (hearing an audio recording of ‘var’ being pronounced) or a visual representation (seeing an object presented alongside a silent talking face pronouncing ‘var’). The children were then tested on their ability to identify the word (“Look at the ‘var’!”).

The bilingual children were first assigned to one of two learning conditions: the auditory modality or the visual modality. Children in both conditions went through a familiarisation phase, in which they saw and heard a video recording of a person saying the novel word (‘var’). Then, in the learning phase, the children in the auditory modality group heard the new word while viewing a black screen, whereas the children in the visual modality group saw the new word’s referent (an object on screen) together with the earlier video recording, but in silence. In the testing phase, both groups were tested on their ability to recognise the word, both in the same modality (e.g., the auditory group presented with an audio recording) and cross-modally (e.g., the auditory group presented with a silent video recording).
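If it helps to see the design laid out explicitly, here is a minimal sketch of the four test cells implied by the description above (the condition labels are ours, for illustration, and not the authors’ own):

```python
# A minimal sketch of the 2 (learning modality) x 2 (test modality)
# design described above. Labels are illustrative, not the study's.
from itertools import product

learning_modalities = ["auditory", "visual"]  # assigned between children
test_modalities = ["auditory", "visual"]      # each child tested in both

for learned, tested in product(learning_modalities, test_modalities):
    kind = "same-modality" if learned == tested else "cross-modal"
    print(f"learned: {learned:8s} | tested: {tested:8s} -> {kind} test")
```

Laid out this way, it is easy to see that the cross-modal cells (learn visually, test auditorily, and vice versa) are the ones that tell us whether children can carry a word across the senses.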

Results: Findings from the study suggest that bilinguals are better at forming cross-modal representations of new words learned visually. In other words, bilingual children are better than monolingual children at recognising spoken words that they had previously only encountered visually.

So, what does this all mean, and how does this relate to our Singaporean parents and children? 

Thanks to their enhanced sensitivity to orofacial speech cues (the “hints” we give with our mouth and face), bilingual children are better at recognising words presented in another modality. Perhaps a tip for our Singaporean parents is to exaggerate your mouth movements when speaking words that are difficult to grasp, to help your children better pick up the language (and perhaps try this in the less dominant language). Additionally, testing a word across modalities can be a good gauge of how confident one is in the language.

This post was written by our intern, Shi Fan, and edited by our lab manager Fei Ting and Research Fellow Rui Qi.

Reference(s):

Birulés, J., Bosch, L., Brieke, R., Pons, F., & Lewkowicz, D. J. (2018). Inside bilingualism: Language background modulates selective attention to a talker’s mouth. Developmental Science, 22(3), e12755. https://doi.org/10.1111/desc.12755

Havy, M., & Zesiger, P. E. (2021). Bridging ears and eyes when learning spoken words: On the effects of bilingual experience at 30 months. Developmental Science, 24(1), e13002. https://doi.org/10.1111/desc.13002

Hearing it like I’m Seeing it

Photo by Volodymyr Hryshchenko on Unsplash

Imagine that it is 11.30am, almost lunch break. You are in a meeting with 20 colleagues and your boss, and your stomach is grumbling. Your colleague sitting opposite you mouths the sentence, “What do you want for lunch?” while pointing at the clock. You mouth back, “Pizza ok?”. Your colleague nods.

Back at home, you are with your child. It is 7.30pm. Your child is watching the newest episode of Blue’s Clues on the television at full volume. He looks at you as you ask, “What do you want for dinner?”. Because your voice is muffled by the television, your child looks at you blankly, not knowing what you said.

Why does this happen? How did your colleague know exactly what you said just from your mouthing, but not your child? A simple answer: multisensory (visual-speech) processing.

Wait, did you say pea or pee? In essence, visual-speech processing involves integrating what you hear with what you see at the same time. A relatively large field of research stemming from the “McGurk Effect”*, it helps explain why you sometimes mishear a word depending on how the speaker mouths it. Seeing the movement of the lips and mouth helps shape what you hear.

Developmentally, there are differences in how you and your child integrate such information, at varying levels:

  1. Telling you when the word starts (when did the mouth start to open?)
  2. Telling you what sounds the person is trying to make (did the lips round to make an “oo” sound?)
  3. Telling you the actual word that the speaker said based on what you expect (did my colleague say pairs of sunglasses or pears of sunglasses?)

This integration happens in your brain. In a recent study, two groups of children (aged 8–9 and 11–12) were compared with adults (aged 18–37) on an audiovisual task, known as the Speech-in-Noise perception task, to investigate what happens in the brain when we match mouth shapes to the sounds we hear.

An example of a trial in the Speech-in-Noise perception task

What they found was quite interesting. Firstly, the younger children performed slightly worse and reacted more slowly than the older children and the adults. Secondly, in adults, the brain response responsible for integrating mouth shapes with the sounds heard (called the N400) contributed to their higher accuracy. The older children also performed well, but a different brain response (called the late positive complex, or LPC) contributed to their performance.

Wait… so will my child even know what I’m saying? Not to fret. Your child knows what you’re saying; they just use a different mechanism to do so. Compared to adults, children need to watch the entire articulation of a word on the face more carefully to properly match it to the word they hear. The suggestion is that these brain mechanisms mature over time, so that older children can match what they see to what they hear more efficiently. In time, your child too will be able to predict what you (or their friends) are trying to say when you need to be quiet and can only mouth your sentences.

This post was written by our intern Cameron and edited by our research fellow Rui Qi.

Reference: 

Kaganovich, N., & Ancel, E. (2019). Different neural processes underlie visual speech perception in school-age children and adults: An event-related potentials study. Journal of Experimental Child Psychology, 184, 98-122. https://doi.org/10.1016/j.jecp.2019.03.009

You might also be interested in this:
*About the McGurk Effect: McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746-748. https://doi.org/10.1038/264746a0

Unlocking the Power of Bilingual Brains: A Boost for Your Child’s Listening Skills

Have you ever wondered how being bilingual could shape your child’s brain? A fascinating study dives into the impact of bilingualism on the way our brains process sounds, and the findings might amaze you!

Image by jcomp on Freepik

Bilingual Advantage Unveiled

Scientists use various methods to study the mysteries of the mind. One such method involves biomarkers: measurements of the body that act like clues, helping researchers understand what is happening, for example, inside the brain. In this study, the researchers were interested in subcortical processing, that is, how the brain handles information in structures beneath its surface. By examining biomarkers related to subcortical processing, scientists can gain valuable insights into how our brains work and how factors like bilingualism can shape this important aspect of brain function.

The study, involving 41 young adults, explored the effects of bilingualism on the brain’s subcortical processing, especially in different listening situations. Of the participants, 22 were bilingual and 19 were monolingual, all aged 18 to 25. The researchers measured participants’ fluency in their language(s) and recorded Auditory Brainstem Responses (ABRs) to uncover some intriguing insights. [As a side note, the brainstem is a part of our central nervous system; it sits at the bottom of the brain and connects the brain to the spinal cord. An ABR is recorded by placing electrodes on the scalp to measure brain activity in response to sounds, and it can tell us how the brain’s hearing pathway is working.]

Bilingual individuals showed shorter response times in their brain’s automatic sound processing than their monolingual counterparts. In simpler terms, being fluent in two languages appears to make the brain more efficient at making sense of sounds, even in challenging situations like noisy environments.

How Does It Work?

The brain’s ability to adapt and reorganise how it processes information is known as neuroplasticity, and this study suggests that bilingualism enhances it. Bilingual individuals seem to have more efficient neural pathways connecting different parts of the brain, allowing them to perceive and encode auditory stimuli more effectively. It is like having efficient expressways that let us travel from point A to point B, or detour from point A to point C, very quickly!

Looking Ahead: Potential Applications in Healthcare

The study not only provides insights into brain development but also hints at potential clinical applications. The Speech-ABR technique used in this study could become a valuable tool for audiologists, i.e., specialists who manage hearing problems in adults and children by testing patients with hearing-related concerns, analysing their medical history, diagnosing their condition, and prescribing the right treatment. The technique could also help identify neural biomarkers in different populations, including people with auditory processing disorders, older adults, and individuals with sensory hearing loss.

What Does This Mean for Parents?

For parents, this study offers a glimpse into the potential benefits of exposing your child to multiple languages. Results suggest that individuals with exposure to multiple languages, especially from a young age, may enjoy more efficient subcortical processing in their brains. It’s like giving their brains a workout, making them more adept at handling various listening conditions. So, don’t hesitate to encourage language diversity at home; it might just give your child’s brain an extra boost!

Reference:
Koravand, A., Thompson, J., Chénier, G., & Kordjazi, N. (2019). The effects of bilingualism on speech evoked brainstem responses recorded in quiet and in noise. Canadian Acoustics, 47(2), 23-30. https://jcaa.caa-aca.ca/index.php/jcaa/article/view/3290

This post was written by our intern Zhixing and edited by our Research Assistant Shaza and Research Fellow Rui Qi.

How Syntax Awareness Affects Cross-Language Learning

Happy Lunar New Year! Let’s usher in the Year of the Dragon by learning how Chinese-English bilingual children learn both their languages. 

In a linguistically diverse country such as Singapore, does juggling two different sets of syntax (or grammar) across dissimilar languages cause ‘confusion’, and is it detrimental in any way? Before getting into the nitty-gritty, however, perhaps a crash course on syntax (if you’re new to it) or a refresher (if you’ve encountered it before) is in order.

Syntax: A Crash Course

All languages have specific rules on what words go where, and syntax refers to the order of words and phrases in a sentence. This includes topics such as subject-verb agreement, or where the direct object should be placed. By manipulating the order of words, nuances and meanings can also differ. Consider how the position of the word ‘only’ affects the meaning of the following sentences:

– Only I bake cake. → No one else bakes cake.

– I only bake cake. → I don’t do anything other than bake cake.

– I bake only cake. → I don’t bake anything aside from cake.

How, then, do children acquire such complex rules, especially across languages whose morphosyntactic rules are very dissimilar?

The Transfer Facilitation Model

Consider English and Chinese. When it comes to basic word order, Chinese is thankfully relatively similar to English, with both sharing the canonical Subject-Verb-Object (SVO) order. Examples include ‘小美(S)唱了(V)一首歌(O)’ and ‘Alice (S) sang (V) a song (O)’. This matters because syntactic awareness contributes greatly to reading comprehension; children can use their knowledge of word order to anticipate upcoming information in a text, or even infer the meaning of an unfamiliar word. Beyond basic word order, however, Chinese and English differ sharply, compared to, say, English and Italian, which are both alphabetic languages. Grammar in English typically involves transforming the word itself (e.g., ‘-s’ or ‘-es’ is added to nouns to indicate plurality, cat – cats (= more than 1 cat), and ‘-d’ or ‘-ed’ is added to verbs to indicate past tense, walk – walked). In contrast, grammar in Chinese involves adding accompanying characters (e.g., ‘们’ is added to singular pronouns to indicate more than one person, and ‘了’ or ‘过’ is added to indicate past tense). The two languages thus differ vastly in their morphosyntactic structure, as they come from different language families.

In relation to this, the Transfer Facilitation Model proposed by Koda (2008) suggests that transfer and learning are more pronounced when the languages involved are structurally similar than when they are different. This makes sense, and the idea was put to the test by researchers who followed two groups of Primary 1 and Primary 3 children in Hong Kong.

The Study

All participants received bilingual education in local primary schools, similar to Singapore. Participants were taught English (L2) as a second language and had been exposed to Chinese (L1; bear in mind that this likely means Cantonese, as the study was done in Hong Kong) since birth, according to parent reports. The study was longitudinal, meaning that the children’s syntactic awareness in Chinese and English was assessed over time: once when the study began, and again one year later. Results showed that L1-to-L2 syntactic transfer was mediated by L2 syntactic awareness but not by L1 reading comprehension. In other words, how much their first language (Chinese) assisted their understanding of their second language (English) depended on how well they knew English grammar in the first place. Being able to comprehend sentences well in Chinese did not necessarily translate to a better understanding of English, due to the difference in the two languages’ morphosyntactic structures. Word order awareness (recall the Subject-Verb-Object order) was more transfer-ready in supporting English reading comprehension than morphosyntactic awareness. As such, bilingual learners needed to attain a minimum level of competence in their L2 before they could use L1 resources to facilitate L2 learning. From this, we can also infer that when children learn two very different languages, their brains helpfully make connections wherever possible, while retaining the different syntax rules for each language.

Application

Parents of Chinese-English bilingual children reading this, do not fret! Although cross-language transfer may be limited to some degree, syntactic awareness in both Chinese and English improved significantly after one year, by virtue of the children’s exposure to both languages. Thus, helping your child explore the features of each language – be it through conversation, books, or other interactive media – is already a great boon for their comprehension abilities. The learning capacity of a child is a wondrous thing; after all, we have been through it ourselves.

This post was written by our intern Jieying, and edited by our Research Fellow Rui Qi.

Reference:
Siu, S. T.-S., & Ho, S. S.-H. (2020). A longitudinal investigation of syntactic awareness and reading comprehension in Chinese-English bilingual children. Learning and Instruction, 67, 101327. https://doi.org/10.1016/j.learninstruc.2020.101327

Multilingual Memories: Rekindling the Language Spark: My Journey of Rediscovery

When I went to Boston for a few months as a child and came back to Singapore, people said my accent had changed, but I didn’t think so. I couldn’t hear it myself, but I could feel that the way I was shaping my words was different. I could feel that my words were rounder than usual and used more of the back of my mouth than before. My family noticed, and my friends teased me, but I was oblivious. I hadn’t learned it consciously, yet my tongue remembered. This unconscious mimicry left me fascinated by the unconscious mechanisms of child language learning, and leaves my adult self staring wistfully at my Duolingo notifications, wondering where that magic went.

As a child, I managed to learn Cantonese and Japanese alongside the prescribed Singaporean languages of Mandarin and English, but that facility seems to have since faded. I’ve been trying to learn Hindi for a few months, in order to communicate with my boyfriend’s relatives, and to say that it hasn’t been as easy as my childhood experiences is an understatement. The hardest thing to wrap my mind around has been the introduction of gendered nouns. It becomes even more confusing when one realises the gender of the noun is not carried on the noun itself, but on the accompanying noun modifiers. For example, “She is reading her book” in Hindi is “Vo apni kitab padh rahi hai” – the gender of the book’s owner is carried on the possessive (apni) and the present participle (rahi hai) but not on the pronoun itself (Vo). When I complain of the difficulty, I am met with sympathetic nods and people saying that they simply learned the sentence structure as part of the word when learning the language as children.

Which brings me back to the original question: what gives children their unparalleled facility in learning languages, and is it possible to replicate it as an adult? The answer to the first part of the question lies in a confluence of factors: a brain built for exploration, a playground of fearless communication, and the constant symphony of immersion.

The Brain’s Linguistic Jungle Gym: Picture a child’s brain as a vibrant jungle gym of neural pathways, buzzing with activity. Every sound, every interaction, builds new connections, a labyrinth where languages weave and intertwine. Unlike adults, whose pathways are paved and settled, children have a brain under construction, a “critical period” during which they effortlessly absorb sounds and rhythms, mimicking them with a near-native precision that eludes many adults. Even their vocal cords are more adaptable, morphing to new pronunciations with relative ease.

Fearless Explorers of Words: Probably the bigger issue boils down simply to shame, which is no doubt more ingrained in adults than in children. Even when meeting people who can speak Hindi, I am usually too shy to try, letting my long pauses and perceived awkwardness fill me with panic. By contrast, children approach language with the courage of explorers, unburdened by self-doubt or the pressure of “getting it right.” They babble, experiment, and make mistakes without flinching, piecing together the puzzle of communication through trial and error. This playful approach allows them to focus on the fun, the music, the emotional connection, rather than the technicalities. Adults, burdened by the “work” of language learning, sometimes forget this essential element. When learning becomes a chore rather than a playful exploration, the spark of curiosity dims, and the path to fluency becomes a dusty road.

So, what can I do to recapture the magic of childhood language acquisition? I guess the answer lies in embracing the child within. To immerse myself in the language. Find opportunities for playful interaction, conversation, and laughter. Embrace the mistakes, the stumbles, the silly pronunciations. And most importantly, let go of the pressure to be perfect.

I want to have faith that we can all become linguistic chameleons. It may take more effort than it did for our younger selves, but I have to learn to see the journey itself as a reward. After all, my attempt to learn Hindi is not just about mastering grammar and vocabulary; it’s about finding out more about my boyfriend’s family’s culture, their perspectives, and their world. Here’s hoping I can shed my doubts and let myself go on a wonderful cultural odyssey.

Jin Yi is our Research Assistant, working on the language mixes project. Jin Yi’s languages are English, Mandarin, Cantonese, Japanese and Arabic.

Want to read more of our Multilingual Memories? Click here!

Unlocking Language Learning: The Power of Bilingual Subtitles in TV Shows and Movies

Have you ever wondered how your child’s language skills can be boosted while watching TV shows or movies? A recent study dives into the fascinating world of language learning through subtitles, shedding light on a key tool: bilingual subtitles.

The ability to grasp a wide range of words is vital for language proficiency, especially for those learning a second language. Whether it’s through intentional learning or picking up words incidentally while reading, listening, or watching, building a robust vocabulary is crucial for successful language comprehension. Watching television programs and movies is a common way for language learners to absorb a new language. But what if we told you that the subtitles on the screen could be the key to unlocking a richer vocabulary?

Enter bilingual subtitles, which simultaneously show subtitles in the language of the programme (the learner’s second language, L2) and translations in the viewer’s first language (L1). The big question is, do these bilingual subtitles really help in learning new words, or do they just distract us from the plot? Researchers set out to explore the effectiveness of bilingual subtitles in helping learners pick up new words. The study reviewed existing knowledge, pointing out that while captions and L1 subtitles can aid vocabulary learning, the jury is still out on bilingual subtitles. The researchers wanted to fill this gap in understanding.

Figure 1: Visual diagram of the research procedure.
From the authors Wang & Pellicer-Sánchez (2022)

To dig deeper, the study used eye-tracking technology to see how our eyes move when faced with bilingual subtitles. Previous studies hinted that viewers might spend more time on bilingual subtitles, but this research aimed to understand whether that extra time translates into better vocabulary.

The results are in, and it’s good news for bilingual subtitle enthusiasts! Participants who watched with bilingual subtitles not only improved their vocabulary but also outperformed the other groups in recalling and recognizing word meanings. It seems those extra seconds spent reading the subtitles paid off. Eye-tracking results showed that viewers using bilingual subtitles focused more on the translations than on the actual target words. This suggests that our eyes work extra hard to connect the form and meaning of words, providing unique insight into how we process language on screen.

The study also explored the relationship between eye movements and vocabulary gains. For the bilingual subtitles group, the time spent reading L2 words predicted their gains in word form recognition and meaning recall. It’s like our eyes are guiding the way to better language skills! The study not only reinforces the benefits of watching TV for language learning but also highlights the potential of bilingual subtitles. As parents, you might want to consider turning on those subtitles to give your child’s language skills an extra boost. However, given that the participants in this study were all adults, there is no guarantee that the same results will be seen in children.

Nonetheless, the next time you sit down for a family movie night, there is no harm in turning on the bilingual subtitles. You might just be giving your child’s language skills a superhero-level upgrade! Happy watching and learning!

Reference:
Wang, A., & Pellicer‐Sánchez, A. (2022). Incidental vocabulary learning from bilingual subtitled viewing: An eye‐tracking study. Language Learning, 72(3), 765-805. https://doi.org/10.1111/lang.12495

This post was written by our intern, Zhi Xing, and edited by our lab research fellow Rui Qi.

Switch Off, Switch On: The Unique Ability in Your Bilingual Kid

Language planning in Singapore has its origins in the early 1980s – one strand of which is the promotion of bilingualism, intended to build up the cultures of Singapore’s heritage groups and prevent their erosion. No doubt, being bilingual has many other benefits not easily identifiable to the public eye, ranging from an enhanced ability to focus on a single task to more efficient switching between tasks. However, many of these unique abilities reported in bilinguals are often observed in adults – but what about the little ones?

Many resources, many talents! Research into bilingualism, whatever its focus, tends to acknowledge one thing – compared to monolinguals, bilinguals require more cognitive effort to master two languages. Zooming in further, simultaneously grasping how non-tone languages (e.g., English) use stress for emphasis and questions (do you wish to EAT? vs. do YOU wish to eat?) and how tone languages (e.g., Mandarin) use pitch to distinguish words (拔 bá (pluck) vs. 爸 bà (dad)) deserves some commendation. This is what allows one to understand multiple languages in a mixed sentence, “switching off” one’s sensitivity to the cues of the first language while “switching on” the second almost simultaneously. And acquiring this ability is not at all difficult: known as perceptual switching, this capacity to selectively interpret conversations that mix languages is ever present in your child.

Come, Look-see, Hear-see. When does your child develop this ability, then? In 2016, a study conducted by Singh and Quam sought to fill this gap. Children aged 3 or 4 years were taught two new two-syllable words, each with a matching object, through a puppet show in English and Mandarin. They were later presented with the same object, either with the word they had learned, or with the same word but with its tone changed, in a language-specific sentence. Interestingly, the researchers found a distinction in the way children discerned these words. While children aged 3 could not tell the difference between what they had learned and what was wrongly presented, those aged 4 could do so, using tone as a language cue!

Can your child recognize the difference between correct and incorrectly pronounced words?
Photo by Iana Dmytrenko on Unsplash

Okay, so what leh? Perhaps findings from Singh and Quam’s study can inform how we use our languages around our children at home. With so many languages in Singapore, we may worry that our children will not learn effectively amid the mixing of languages in their environments. However, this may not hold true as they grow. Harnessing the ability to understand different languages in a mixed sentence is not something to be frowned upon – rather, it may just be the start of a cascade of benefits waiting to blossom in the long run.

Reference:
Singh, L., & Quam, C. (2016). Can bilingual children turn one language off? Evidence from perceptual switching. Journal of Experimental Child Psychology, 147, 111-125. https://doi.org/10.1016/j.jecp.2016.03.006 (~20min read)

This blog post was written by our intern Cameron and edited by our lab manager Fei Ting.

What do you call the people around you? Do you call strangers ‘uncle’ and ‘auntie’? Do you use these same words for family members? We’re interested in finding out more about these ‘kinship terms’ used in Singapore! Participate in our online study here: https://ntusingapore.qualtrics.com/jfe/form/SV_6WgBjxXcjSM3IvI

Multilingual Memories: Different languages, different souls

Unlike many Chinese children, my first language is not Mandarin – or at least not standard Mandarin. I was mainly exposed to the Southwestern dialect of Mandarin before going to school, since my grandparents stayed with me and spoke to me the most at that time. Though it is classified as a dialect of Mandarin, speakers of two different Mandarin dialects can hardly understand each other, owing to differences in tones and native vocabulary. Even after I had learned, and was “forced” to use, standard Mandarin in school, I still used the dialect with my family and most of my friends, and its “vulgar” tones (in comparison to the Chinese government’s standardization of Mandarin) always gave me a sense of warmth.

Figure from: Varieties of Mandarin. Wurm, Stephen Adolphe; Li, Rong; Baumann, Theo; Lee, Mei W. (1987), Language Atlas of China, Longman, ISBN 978-962-359-085-3.

My first experiences with English and Cantonese were all related to music. My grandma loves music and used to play and sing songs to me ever since I was born. My mom would drive me to school, giving me an hour of music a day. Most of the songs were in Cantonese or English, making my taste in music unique compared to my peers. Now when I think back on it, I feel the lack of Mandarin or other dialects in the playlist was probably a conscious decision by my parents. I did fall in love with those songs anyway, and I tried hard to find lyrics or transcribe them using the phonetic alphabet, which greatly helped my pronunciation.

While my emotional memories of English come from music, the rational part comes from a few novels. I began reading dystopian novels when I was 12. Maybe because English is not my mother tongue, I could not feel as much emotional energy from English literature as from Chinese. Thus, the more I dove into it, the more I found that I could be a bystander while reading in English. With little sentimental impact from the words, I could evaluate the pros and cons of people’s behaviors or governments’ policies.

I chose to take German as one of my electives after I had gained some basic knowledge of linguistics. I was fascinated by etymology, whether the connections among Chinese, Korean and Japanese, or the relationships among the various Indo-European languages. It was a happy and fruitful time, but I gradually realized that English is probably the easiest Indo-European language to learn. With no grammatical gender and far fewer tenses, English may now be the freest Western language, since you can hardly be wrong regardless of what you say.

This post was written by our intern, Zhixing. Zhixing is a 3rd Year student majoring in Psychology and Biological sciences. He speaks English and Mandarin, can understand Cantonese, and is learning German!

Multilingual Memories is a collection of stories about our experiences learning languages while growing up as bi- or multilinguals! Childhood is when most of us start learning languages, and we think it would be fun to reminisce about those memories together. Want to read more Multilingual Memories? Click here!

Lexical semantic network in bilinguals

Babies are fast learners. Whether they are raised as monolinguals or bilinguals, they seem to progress at the same pace in learning new words and understanding their meanings. By the age of 24 months, most babies are able to produce simple word combinations like “mommy bye-bye” and to respond to, or at least understand, simple commands such as “No, no, cannot eat.” However, given that bilingual babies are exposed to two languages, one of which may be more familiar and one less familiar, researchers often wonder how these babies organise words in their brain networks. How do babies learn the meaning of the same word across languages? Do they link words with the same meaning in different languages together?

Previously, a priming task by Von Holzen and Mani (2012) revealed that German-English bilingual toddlers of 21 to 43 months showed rhyming associations between their second language (L2, English) and first language (L1, German). That is, in the task, babies were better able to recognise a target word in German (L1) when it was preceded by a rhyming prime word in English (L2). For example, participants were better able to recognise the German target word Stein ‘stone’ when they were shown the English prime word leg right before it. The rationale is that leg, called Bein in German, sounds similar to Stein and would be activated in participants’ minds, facilitating their subsequent recognition of Stein.

Besides doing research on infants’ word association of different languages, there are also studies that talk about the lexical semantic network, which is where meanings of words are linked together (see Figure below).

Semantic Network (Retrieved from Wikipedia ‘Semantic networks’, https://commons.wikimedia.org/w/index.php?curid=1353062)

For instance, beyond the rhyme of Bein and Stein that can trigger a link, researchers wondered whether bilingual infants at 18 and 24 months would show priming when they needed to think about the meanings of words (not just their sounds), and whether their vocabulary knowledge would affect how quickly they processed the information. In other words, are 18- and 24-month-old bilingual children able to activate the meanings of both target and prime words when the two words are presented in different languages? And would a bigger or smaller vocabulary in the two languages affect the speed at which children do this task?

In a study done by De Anda and Friend (2020), thirty-two English- and Spanish-learning bilingual toddlers were recruited. Three types of tests were used in their study:

  1. Computerized Comprehension Test (CCT)

The CCT is a test in which infants are prompted to touch images on a touch-sensitive monitor. Infants were escorted to a dimly lit room with a caregiver. Two images (a target and a distractor) were presented simultaneously on the left and right of the monitor. The experimenters engaged the infants’ attention by speaking to them in infant-directed speech (speech that is exaggerated to keep infants interested). Infants had seven seconds to choose the correct image. For each trial on which the infant failed to answer, the experimenter would touch the screen for them.

  2. Intermodal Preferential Looking (IPL) Priming Task

In the IPL task, the experimenters first caught the infants’ attention with a spinning wheel. At the same time, infants heard a sentence containing a prime word that was either semantically related or unrelated to the target word.

After a small pause, the target word was played on its own. Finally, the target word was played at the same time as a distractor word. 

For example, a target word “apple” was presented for 200ms. In this case, the prime word used was “banana”, which belongs to the category of ‘fruit’, just like the target word “apple”.

The experiment consisted of four blocks: Spanish prime words to Spanish targets (Spanish-Spanish), Spanish prime words to English targets (Spanish-English), English prime words to English targets (English-English), and English primes to Spanish targets (English-Spanish). Each block consisted of three semantically related trials (e.g., prime ‘banana’, target ‘apple’) and three unrelated trials (e.g., prime ‘car’, target ‘apple’) – see the sketch after this list for the full layout.

  3. The MacArthur Bates Communicative Development Inventory (MCDI)

The MCDI is a parent-report measure of a child’s early language: a checklist on which parents indicate the words their child uses and understands. The Spanish version, the Inventario del Desarrollo de Habilidades Comunicativas (IDHC), was also used in this experiment. The MCDI yields two additional measures: total conceptual vocabulary (total vocabulary size minus the translation equivalents) and translation equivalents (words with the same meaning across the two languages, e.g., ‘apple’ in English and manzana ‘apple’ in Spanish).
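To make the structure of the priming task concrete, here is a minimal sketch of the four blocks and their trial types as described above (the word lists are illustrative placeholders, not the study’s actual stimuli):

```python
# A minimal sketch of the IPL priming design described above:
# four prime-target language pairings, each with three semantically
# related and three unrelated trials. Stimuli here are placeholders.
blocks = [
    ("Spanish", "Spanish"),
    ("Spanish", "English"),
    ("English", "English"),
    ("English", "Spanish"),
]

for prime_lang, target_lang in blocks:
    for trial_type in ["related"] * 3 + ["unrelated"] * 3:
        print(f"{prime_lang} prime -> {target_lang} target ({trial_type})")
```

The interesting comparisons are the mixed blocks (Spanish-English and English-Spanish), which test whether a prime in one language can activate meaning in the other.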

Findings

  1. Children’s vocabulary size in each language did not affect their speed in this task, regardless of whether the prime and target words were in the same or different languages.
  2. Children with more translation equivalents across languages showed greater differences in looking time to semantically unrelated and related trials when the prime and target words were in the same language.
  3. Children were more likely to learn a translation equivalent if the word was first learnt in the more-familiar language.
  4. Bilingual children tend to use their knowledge from the other more-familiar language to support the less-familiar language. 

All in all, important language milestones seem to appear at the same time for both monolinguals and bilinguals. However, owing to their dual language exposure, there are some complex links between vocabulary knowledge and the process of understanding word meanings in bilingual children’s language development.

Reference

De Anda, S., & Friend, M. (2020). Lexical-semantic development in bilingual toddlers at 18 and 24 months. Frontiers in Psychology, 11, 508363. https://doi.org/10.3389/fpsyg.2020.508363

This post was written by our intern, Hong Ern, and edited by our Research Fellow Rui Qi.