The Language Of Music Essay Conclusion

Music is a universal language. Or so musicians like to claim. “With music,” they’ll say, “you can communicate across cultural and linguistic boundaries in ways that you can’t with ordinary languages like English or French.”

On one level, this statement is obviously true. You don’t have to speak French to enjoy a composition by Debussy. But is music really a universal language? That depends on what you mean by “universal” and what you mean by “language.”

Every human culture has music, just as each has language. So it’s true that music is a universal feature of the human experience. At the same time, both musical and linguistic systems vary widely from culture to culture. In fact, unfamiliar musical systems may not even sound like music. I’ve overheard Western-trained music scholars dismiss Javanese gamelan as “clanging pots” and traditional Chinese opera as “cackling hens.”

Nevertheless, studies show that people are pretty good at detecting the emotions conveyed in unfamiliar musical idioms, at least for the two basic emotions of happiness and sadness. Specific features of melody contribute to the expression of emotion in music: higher pitch, more fluctuation in pitch and rhythm, and faster tempo convey happiness, while the opposite conveys sadness.
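To see how such cues could be operationalized, here is a minimal sketch in Python; the feature names and threshold values are hypothetical illustrations, not parameters taken from the studies mentioned above:

```python
# A minimal sketch of the happy/sad cue mapping described above.
# All feature names and threshold values are hypothetical.

def classify_melody(mean_pitch_hz: float,
                    pitch_variability: float,
                    tempo_bpm: float) -> str:
    """Each cue casts a vote: higher pitch, more fluctuation, and
    faster tempo vote for 'happy'; their opposites vote for 'sad'."""
    votes = 0
    votes += 1 if mean_pitch_hz > 300 else -1      # hypothetical cutoff
    votes += 1 if pitch_variability > 0.5 else -1  # hypothetical cutoff
    votes += 1 if tempo_bpm > 120 else -1          # hypothetical cutoff
    return "happy" if votes > 0 else "sad"

print(classify_melody(mean_pitch_hz=350, pitch_variability=0.8, tempo_bpm=140))  # happy
print(classify_melody(mean_pitch_hz=220, pitch_variability=0.2, tempo_bpm=70))   # sad
```

Actual studies treat these cues as continuous and graded rather than as hard cutoffs, but the sketch captures the direction in which each cue points.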

Perhaps then we have an innate musical sense. But language also has melody—which linguists call prosody. Exactly these same features—pitch, rhythm, and tempo—are used to convey emotion in speech, in a way that appears to be universal across languages.

Listen in on a conversation in French or Japanese or some other language you don’t speak. You won’t understand the content, but you will understand the shifting emotional states of the speakers. She’s upset, and he’s getting defensive. Now she’s really angry, and he’s backing off. He pleads with her, but she doesn’t buy it. He starts sweet-talking her, and she resists at first but slowly gives in. Now they’re apologizing and making up….

We understand this exchange in a foreign language because we know what it sounds like in our own language. Likewise, when we listen to a piece of music, either from our culture or from another, we infer emotion on the basis of melodic cues that mimic universal prosodic cues. In this sense, music truly is a universal system for communicating emotion.

But is music a kind of language? Again, we have to define our terms. In everyday life, we often use “language” to mean “communication system.” Biologists talk about the “language of bees,” which is a way to tell hive mates about the location of a new source of nectar.

Florists talk about the “language of flowers,” through which their customers can express their relationship intentions. “Red roses mean… Pink carnations mean… Yellow daffodils mean…” (I’m not a florist, so I don’t speak flower.)

And then there’s “body language.” By this we mean the postures, gestures, movements, and facial expressions we use to convey emotions, social status, and so on. Although we often use body language when we speak, linguists don’t consider it a true form of language. Instead, it’s a communication system, just like the so-called languages of bees and flowers.

By definition, language is a communication system consisting of (1) a set of meaningful symbols (words) and (2) a set of rules for combining those symbols (syntax) into larger meaningful units (sentences). While many species have communication systems, none of these count as language because they lack one or the other component.
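As a toy illustration of these two components, consider the sketch below; the miniature lexicon and the single subject-verb-object rule are invented for this example:

```python
# Toy illustration of the two components of a language:
# (1) a set of meaningful symbols (a miniature lexicon), and
# (2) a rule for combining them (here, one subject-verb-object rule).
# The vocabulary and the rule are invented for this example.
import itertools

lexicon = {
    "noun": ["the dog", "the cat", "the child"],
    "verb": ["sees", "chases", "follows"],
}

# Rule: sentence -> noun verb noun
sentences = [f"{subj} {verb} {obj}"
             for subj, verb, obj in itertools.product(
                 lexicon["noun"], lexicon["verb"], lexicon["noun"])]

print(len(sentences))  # 27 distinct sentences from 6 words and 1 rule
print(sentences[0])    # the dog sees the dog
```

Even this toy system shows why both components matter: the six words alone convey very little, and the rule alone generates nothing without meaningful symbols to combine.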

The alarm and food calls of many species consist of a set of meaningful symbols, but they lack rules for combining those symbols. Likewise, bird song and whale song have rules for combining elements, but these elements aren’t meaningful symbols. Only the song as a whole has meaning—“Hey ladies, I’m hot,” and “Hey other guys, stay away!”

Like language, music has syntax: rules for ordering elements such as notes, chords, and intervals into complex structures. Yet none of these elements has meaning on its own. Rather, it’s the larger structure, the melody, that conveys emotional meaning. And it does that by mimicking the prosody of speech.

Since music and language share a number of features, it’s not surprising that many of the brain areas that process language also process music. But this doesn’t mean that music is language. Part of the misunderstanding comes from the way we tend to think about specific areas of the brain as having specific functions. Any complex behavior, whether language or music or driving a car, will recruit contributions from many different brain areas.

Music certainly isn’t a universal language in the sense that you could use it to express any thought to any person on the planet. But music does have the power to evoke deep primal feelings at the core of the shared human experience. It not only crosses cultures, it also reaches deep into our evolutionary past. And in that sense, music truly is a universal language.

David Ludden is the author of The Psychology of Language: An Integrated Approach (SAGE Publications).

Traditionally, music and language have been treated as distinct psychological faculties. This duality is reflected in older theories about the lateralization of speech and music, in which speech functions were thought to be localized in the left hemisphere and music functions in the right hemisphere of the brain. For example, the landmark paper of Bever and Chiarello (1974) emphasized the different roles of the two hemispheres in processing music and language, with the left hemisphere considered more specialized for propositional, analytic, and serial processing and the right hemisphere more specialized for appositional, holistic, and synthetic relations. This view has been challenged in recent years, mainly because of the advent of modern brain imaging techniques and improved neurophysiological measures for investigating brain function. These innovative approaches have produced an entirely new view of the neural and psychological underpinnings of music and speech. The findings of these more recent studies show that music and speech functions have many aspects in common and that several neural modules are similarly involved in speech and music (Tallal and Gaab, 2006). There is also emerging evidence that speech functions can benefit from music functions and vice versa. This field of research has accumulated a great deal of new information, and it is therefore timely to bring together the work of those researchers who have been most visible, productive, and inspiring in this field.

This special issue comprises a collection of 20 review and research papers that focus on the specific relationship between music and language. Of these 20 papers, 12 are research papers that report entirely new findings supporting the close relationship between music and language functions. Two papers report findings demonstrating that phonological awareness, which is pivotal for reading and writing skills, is closely related to pitch awareness and musical expertise (Dege and Schwarzer, 2011; Loui et al., 2011). Dege and Schwarzer even show that preschoolers’ phonological awareness can be increased by a program of musical training.

Three research papers focus on the relationship between tonal language expertise and musical pitch perception and on whether pitch-processing deficits might influence tonal language perception. Giuliano et al. (2011) demonstrated that Mandarin speakers are highly sensitive to small pitch changes and interval distances, a sensitivity that was absent in the control group. Using ERPs obtained during the pitch and interval perception tasks, their study also reveals earlier ERP responses to these pitch changes (relative to no-change trials) in Mandarin speakers than in controls. In their elegant paper, Peretz et al. (2011) report that native speakers of a tone language, in which pitch contributes to word meaning, are impaired in the discrimination of falling pitches in tone sequences compared with speakers of a non-tone language. Taken together, these two studies illustrate the cross-domain influence of language experience on the perception of pitch, suggesting that the native use of tonal pitch contours in language leads to a general enhancement in the acuity of pitch representations. Tillmann et al. (2011) examined whether subjects suffering from congenital amusia also show impaired pitch processing in speech, specifically for the pitch changes used to contrast lexical tones in tonal languages. Their study revealed that the performance of congenital amusics was inferior to that of controls for all materials, including the Mandarin language, suggesting a domain-general pitch-processing deficit.

Five research papers examine interactions between musical expertise and language functions, or ask whether such interactions are beneficial for phonetic perception. Ott et al. (2011) demonstrate that professional musicians process unvoiced stimuli (irrespective of whether these are speech or non-speech stimuli) differently from controls, suggesting that early phonetic processing is organized differently depending on musical expertise. Strait and Kraus (2011) report perceptual advantages in musicians for hearing and neurally encoding speech in background noise. They also argue that musicians possess a neural proficiency for selectively engaging and sustaining auditory attention to language, and that music thus represents a potential benefit for auditory training. Gordon et al. (2011) examined the interaction between linguistic stress and musical meter and established that the alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics. Their study thus supports the notion of a strong relationship between linguistic and musical rhythm in songs. Hoch et al. (2011) investigated the effect of a musical chord’s tonal function on syntactic and semantic processing and conclude that the neural and psychological resources of music and language processing strongly overlap. The fifth paper of this group (Omigie and Stewart, 2011) demonstrates that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors. Although there are still some differences between music and speech processing, there is thus growing evidence that the two strongly overlap.

Halwani et al. (2011) examined whether the arcuate fasciculus, a prominent white-matter tract connecting temporal and frontal brain regions, differs anatomically between singers, instrumentalists, and non-musicians. They showed that long-term vocal-motor training may lead to an increase in the volume and microstructural complexity (as indexed by fractional anisotropy) of the arcuate fasciculus in singers. Most likely, these anatomical changes reflect the need in singers to strongly link frontal and temporal brain regions, regions that are typically also involved in the control of many speech functions. The beneficial impact of music on speech functions has also been demonstrated by Vines et al. (2011) in their research paper. They examined whether melodic intonation therapy (MIT) for Broca’s aphasics can be improved by simultaneously applying anodal transcranial direct current stimulation (tDCS). Indeed, they showed that combining right-hemisphere anodal tDCS with MIT sped up recovery from post-stroke aphasia.

In addition to these 12 research papers, there are 8 review and opinion papers that highlight the tight link between music and language. Patel (2011) proposes the OPERA hypothesis to explain why music is beneficial for many language functions. The acronym OPERA stands for five conditions that might drive plasticity in speech-processing networks (Overlap: anatomical overlap in the brain networks that process acoustic features used in both music and speech; Precision: music places higher demands on these shared networks than does speech; Emotion: the musical activities that engage this network elicit strong positive emotion; Repetition: the musical activities that engage this network are frequently repeated; Attention: the musical activities that engage this network are associated with focused attention). According to the OPERA hypothesis, when these conditions are met, neural plasticity drives the networks in question to function with higher precision than is needed for ordinary speech communication.

While Patel’s paper is more an opinion paper that puts musical expertise into a broader context, the seven other reviews emphasize specific aspects of the current literature on music and language. Ettlinger et al. (2011) emphasize the specific role of implicitly acquired knowledge, implicit memory, and their associated neural structures in the acquisition of linguistic or musical grammar. Milovanov and Tervaniemi (2011) underscore the beneficial influence of musical aptitude on the acquisition of linguistic skills, for example in learning a second language. Bella et al. (2011) summarize the existing literature on normal and poor-pitch singing and suggest that pitch imitation may be selectively inaccurate in the music domain without being affected in speech, thus supporting the separability of the mechanisms subserving pitch production in music and language. In their extensive review of the literature, Besson et al. (2011) discuss transfer effects from music to speech, focusing specifically on musical expertise in musicians. Shahin (2011) reviews neurophysiological evidence supporting an influence of musical training on speech perception at the sensory level and discusses whether such transfer could facilitate speech perception in individuals with hearing loss; this review also explains the basic neurophysiological measures used in studies of speech and music perception. The comprehensive review by Koelsch (2011) summarizes findings from neurophysiology and brain imaging on music and language processing and integrates them into a broader “neurocognitive model of music perception,” with specific emphasis on musical syntax and its similarities to and differences from language syntax. Schon and Francois (2011) present a review focusing on a series of electrophysiological studies that investigated speech segmentation and the extraction of linguistic versus musical information. They demonstrate that musical expertise facilitates the learning of both linguistic and musical structures, and note that electrophysiological measures are often more sensitive than behavioral measures for identifying music-related differences.

Taken together, this special issue provides a comprehensive summary of current knowledge on the tight relationship between music and language functions. Musical training may thus aid in the prevention, rehabilitation, and remediation of a wide range of language, listening, and learning impairments. At the same time, this body of evidence may shed new light on how the human brain uses shared network capabilities to generate and control different functions.

References

  • Bella S. D., Berkowska M., Sowinski J. (2011). Disorders of pitch production in tone deafness. Front. Psychol. 2:164. doi: 10.3389/fpsyg.2011.00164
  • Besson M., Chobert J., Marie C. (2011). Transfer of training between music and speech: common processing, attention, and memory. Front. Psychol. 2:94. doi: 10.3389/fpsyg.2011.00094
  • Bever T. G., Chiarello R. J. (1974). Cerebral dominance in musicians and nonmusicians. Science 185, 537–539. doi: 10.1126/science.185.4146.99-b
  • Dege F., Schwarzer G. (2011). The effect of a music program on phonological awareness in preschoolers. Front. Psychol. 2:124. doi: 10.3389/fpsyg.2011.00124
  • Ettlinger M., Margulis E. H., Wong P. C. (2011). Implicit memory in music and language. Front. Psychol. 2:211. doi: 10.3389/fpsyg.2011.00211
  • Giuliano R. J., Pfordresher P. Q., Stanley E. M., Narayana S., Wicha N. Y. (2011). Native experience with a tone language enhances pitch discrimination and the timing of neural responses to pitch change. Front. Psychol. 2:146. doi: 10.3389/fpsyg.2011.00146
  • Gordon R. L., Magne C. L., Large E. W. (2011). EEG correlates of song prosody: a new look at the relationship between linguistic and musical rhythm. Front. Psychol. 2:352. doi: 10.3389/fpsyg.2011.00352
  • Halwani G. F., Loui P., Ruber T., Schlaug G. (2011). Effects of practice and experience on the arcuate fasciculus: comparing singers, instrumentalists, and non-musicians. Front. Psychol. 2:156. doi: 10.3389/fpsyg.2011.00156
  • Hoch L., Poulin-Charronnat B., Tillmann B. (2011). The influence of task-irrelevant music on language processing: syntactic and semantic structures. Front. Psychol. 2:112. doi: 10.3389/fpsyg.2011.00112
  • Koelsch S. (2011). Toward a neural basis of music perception – a review and updated model. Front. Psychol. 2:110. doi: 10.3389/fpsyg.2011.00110
  • Loui P., Kroog K., Zuk J., Winner E., Schlaug G. (2011). Relating pitch awareness to phonemic awareness in children: implications for tone-deafness and dyslexia. Front. Psychol. 2:111. doi: 10.3389/fpsyg.2011.00111
  • Milovanov R., Tervaniemi M. (2011). The interplay between musical and linguistic aptitudes: a review. Front. Psychol. 2:321. doi: 10.3389/fpsyg.2011.00321
  • Omigie D., Stewart L. (2011). Preserved statistical learning of tonal and linguistic material in congenital amusia. Front. Psychol. 2:109. doi: 10.3389/fpsyg.2011.00109
  • Ott C. G., Langer N., Oechslin M., Meyer M., Jancke L. (2011). Processing of voiced and unvoiced acoustic stimuli in musicians. Front. Psychol. 2:195. doi: 10.3389/fpsyg.2011.00195
  • Patel A. D. (2011). Why would musical training benefit the neural encoding of speech? The OPERA hypothesis. Front. Psychol. 2:142. doi: 10.3389/fpsyg.2011.00142
  • Peretz I., Nguyen S., Cummings S. (2011). Tone language fluency impairs pitch discrimination. Front. Psychol. 2:145. doi: 10.3389/fpsyg.2011.00145
  • Schon D., Francois C. (2011). Musical expertise and statistical learning of musical and linguistic structures. Front. Psychol. 2:167. doi: 10.3389/fpsyg.2011.00167
  • Shahin A. J. (2011). Neurophysiological influence of musical training on speech perception. Front. Psychol. 2:126. doi: 10.3389/fpsyg.2011.00126
  • Strait D. L., Kraus N. (2011). Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise. Front. Psychol. 2:113. doi: 10.3389/fpsyg.2011.00113
  • Tallal P., Gaab N. (2006). Dynamic auditory processing, musical experience and language development. Trends Neurosci. 29, 382–390. doi: 10.1016/j.tins.2006.06.003
  • Tillmann B., Burnham D., Nguyen S., Grimault N., Gosselin N., Peretz I. (2011). Congenital amusia (or tone-deafness) interferes with pitch processing in tone languages. Front. Psychol. 2:120. doi: 10.3389/fpsyg.2011.00120
  • Vines B. W., Norton A. C., Schlaug G. (2011). Non-invasive brain stimulation enhances the effects of melodic intonation therapy. Front. Psychol. 2:230. doi: 10.3389/fpsyg.2011.00230
