Language + Recitation Articles (PSYCH UA-25)
These 8-page class notes were uploaded by Brianna René on Wednesday, May 4, 2016. The notes belong to PSYCH UA-25 at New York University, taught by Clay Curtis in Winter 2016. Since upload, they have received 45 views. For similar materials see Cognitive Neuroscience in Psychology at New York University.
3.5. Language

Anomia: The inability to find the words to label things in the world. Patient H.M. was able to speak, read, and understand language; his only problem was not being able to name things, with nouns being the worst. People with anomia are aware of their deficit, and they use correct grammatical structures as well as pantomiming to get their point across.

Tip-of-the-Tongue Phenomenon: Object knowledge is NOT the same as its label. That's why, when someone suggests different names, you know which ones definitely aren't the one you are looking for. In the same way, the ability to produce speech is NOT the same as the ability to understand language. (The two pathways run in opposite directions of each other.)

Dysarthria: Difficulty controlling the muscles used in speech.
Apraxia: Impairment of motor PLANNING and speech articulation. (By contrast, optic ataxia is the inability to guide movements toward what is being looked at.)
Aphasia: A deficit in language production or comprehension.

Left Peri-Sylvian Language Area = Broca's Area (Inferior Frontal Gyrus) + Wernicke's Area (Superior Temporal Gyrus). A left-hemisphere network involving the frontal, parietal, and temporal lobes is especially critical for language production and comprehension.

Broca's Aphasia (Anterior Aphasia): A disorder of speech production that Broca's aphasics are aware of. They also have problems with syntax (the rules that govern how words go together in a sentence). They can understand very simple grammatical sentences but not much more complex than that. ex. "The boy kicked the girl" vs. "The boy was kicked by the girl." Speech is often slow and deliberate, and it lacks function words. They cannot repeat words said to them with much efficacy because they have trouble controlling the speech muscles (dysarthria), and they have a hard time understanding reversible sentences (which are typically more grammatically complex).

Wernicke's Aphasia (Posterior Aphasia): A language comprehension disorder.
Patients have difficulty understanding spoken or written language; sometimes they cannot understand language at all. Speech is fluent, with normal grammar and prosody, but it makes no sense. It has been shown, however, that the full effect of Wernicke's Aphasia appears only if Wernicke's area is damaged along with the surrounding tissue in the posterior temporal lobe OR the underlying white matter tracts that connect the temporal language areas to the rest of the brain. Lesions to JUST Wernicke's area cause only temporary aphasia, and comprehension improves as swelling reduces.

Conduction Aphasia: Conduction aphasics can understand words they hear or see, and they can speak words, but they cannot correct their own errors because no information is relayed between Wernicke's Area and Broca's Area. Conduction aphasia occurs due to damage to the Arcuate Fasciculus.
Arcuate Fasciculus: The white matter tract that connects Broca's Area to Wernicke's Area.
Conduction aphasics know they have made an error, but they cannot rectify it.

Global Aphasia: A devastating disorder in which the patient can neither produce nor comprehend language. It results from extensive left-hemisphere damage, pretty much decimating Broca's and Wernicke's Areas and everywhere in between.

Classical Model of Language [DEFUNCT]: Specific brain regions performed specific tasks such as language comprehension and production. Lichtheim proposed that word storage (Wernicke's Area), speech planning (Broca's Area), and conceptual information stores are located in separate brain regions. It assumes that Broca's/Wernicke's aphasia results only from damage to the respective areas themselves, and that's INCORRECT. Language emerges from a network of brain regions. It involves:

Phonology: Speech sounds and their mental representation; how sounds are put together into words.
Phonology is disrupted in both posterior and anterior aphasias; however, the ability to produce the correct sound for a given phoneme depends more on ANTERIOR regions.

Orthography: Knowing how letters are put together into words; the visual representation of words.
Morphology: Words and word structure.
Semantics: Word and sentence meaning.
Syntax: Sentence structure.

Mental Lexicon: A mental store of information about words, which includes semantic and syntactic information as well as the details of word forms. Once we perceptually analyze words, it is hypothesized that three general functions occur:
Lexical Access: The stage of processing where the result of prior perceptual analysis activates word-form representations in the lexicon.
Lexical Selection: The selection of the representation that best matches the word-form input.
Lexical Integration: Words are integrated into a full sentence or larger context.

Grammar and syntax are the rules by which lexical items are organized. Our mental lexicon must be super efficient, since we are able to speak so quickly. The mental lexicon is not organized alphabetically (if it were, we wouldn't be able to speak as quickly as we do).

The first organizational unit in the lexicon is the Morpheme, which is also the smallest meaningful unit of language. A second organizing principle is frequency: words that are used more are accessed more quickly than less frequently used words. A third unit is the Phoneme, the smallest unit of sound that makes a difference to meaning. A fourth organizing factor is the semantic relationships between words.

The organization of the Mental Lexicon (as supported by semantic priming experiments) indicates that words of similar meaning are grouped together, so that when one word is activated the related word is too, and the brain must decide which word is most appropriate. This also makes a word easier to process when the word before it primes its meaning. ex. "car" primes the word "truck."
A related phenomenon is the Neighborhood Effect. "Auditory neighborhoods" consist of words that differ by one phoneme, ex. cat, hat, sat. These words are identified more slowly because they have so many neighbors. Semantically similar words prime each other.

Semantic Dementia: The loss of semantic (word-meaning) memory after anterior temporal lobe damage. The anterior temporal lobes are involved in storing concept information. Patients with Semantic Dementia often have trouble assigning objects to a specific semantic category: when prompted with a picture of a dog, they will say "animal" instead. This provides some evidence for a semantic network, because related meanings are substituted, confused, or lumped together, which is what we would expect of a network of interconnected nodes within the mental lexicon.

ERP signatures of semantic vs. syntactic violations: The P600 wave is involved with noticing syntactic violations.

Understanding Speech
Listeners have to separate meaningful speech from mere noise and divide the speech stream into meaningful units. Infants can distinguish between any phonemes during the first year of life; however, their senses become tuned to the language they experience. The sounds they make become more and more similar to the phonemes they hear, and by the time they are 1, they no longer produce non-native phonemes.

Segmentation Problem:
Coarticulation = no silences between words; two words can be slurred together.
Segmentation = how we divide the sound stream into separate words.
An important key to understanding speech is prosodic information: it's easy to tell the rhythm of speech, especially when a question is asked or when emphasis is being made.

The Superior Temporal Gyrus is important for sound perception.
Pure Word Deafness: Difficulty understanding speech sounds. Hearing is intact, but individuals cannot make sense of speech. This is purely auditory; they can still read books and whatnot.
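The lexicon-as-network idea behind semantic priming can be made concrete with a toy spreading-activation sketch. This is only an illustration, not a model from the lecture: the mini-lexicon, the spread fraction, and the response-time formula are all invented for the example.

```python
# Toy sketch of semantic priming via spreading activation.
# The lexicon, weights, and timing constants below are invented
# for illustration only.

SPREAD = 0.5  # fraction of activation passed to each semantic neighbor

# Tiny mental lexicon: each word points to semantically related words.
LEXICON = {
    "car": ["truck", "road", "wheel"],
    "truck": ["car", "road"],
    "doctor": ["nurse", "hospital"],
    "nurse": ["doctor", "hospital"],
    "hospital": ["doctor", "nurse"],
    "road": ["car", "truck"],
    "wheel": ["car"],
}

def activations_after(prime):
    """Activate the prime fully, then spread activation one step to neighbors."""
    act = {word: 0.0 for word in LEXICON}
    act[prime] = 1.0
    for neighbor in LEXICON[prime]:
        act[neighbor] += SPREAD
    return act

def recognition_time(word, act, base=600, speedup=200):
    """Hypothetical lexical-decision time in ms: pre-activation speeds recognition."""
    return base - speedup * act[word]

act = activations_after("car")
print(recognition_time("truck", act))   # primed by "car": faster
print(recognition_time("doctor", act))  # unrelated: slower
```

The point of the sketch: because "truck" sits next to "car" in the network, presenting "car" pre-activates "truck," so a later decision about "truck" is faster than one about the unrelated "doctor," which is the semantic priming effect described above.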
Heschl's Gyri: Activated by both speech and non-speech sounds; associated with hearing sound in general. But where do we distinguish speech from other sounds?

Binder fMRI Study (2000) compared:
Non-speech sounds: tones, frequencies.
Speech sounds: reversed speech, pseudo-words that contained the same letters as a real word ("sked"), and real words ("desk").

Hierarchical Model of Word Processing: First, the auditory input moves from Heschl's Gyri to the STG/STS (here no distinction is made between speech and sound). The final stage of word processing takes place in the angular gyrus and temporal pole. Spoken-word recognition proceeds anteriorly along the superior temporal gyrus (STG): phoneme processing appears localized to the left mid-STG, integration of phonemes into words to the left anterior STG, and processing of short phrases to the most anterior locations of the STS.

What part of the brain responds to speech sounds and not just noise?
Sound processing = Primary Auditory Cortex.
Speech processing = Superior Temporal Sulcus (STS).

Written Input
Learning to read requires linking arbitrary visual symbols into meaningful words.

Selfridge's Pandemonium Model of Letter Processing: Selfridge envisioned the mind as a collection of tiny "demons." Each group of demons is assigned to a specific stage in recognition, and within each group the demons work in parallel.
Image Demon: Records what is seen.
Feature Demons: Each feature demon represents a specific feature, such as a curved or straight line, and must "yell" if it sees the feature it corresponds to. (Demons do not represent single neurons but clusters of neurons that fire for the same thing.)
Cognitive Demons: These demons watch the feature demons and are hyped up by their yells. Each cognitive demon is responsible for a specific pattern (an alphabet letter); the more features that correspond to its pattern, the louder it yells ("DAS ME").
Decision Demons: The final stage in processing; they select the "loudest" cognitive demon, which becomes our conscious perception. Pandemonium simply represents the collective "yelling" of the system; yelling represents neuronal firing.

Connectionist Network for Letter Recognition (McClelland & Rumelhart) [BETTER]: Three layers of representation:
1. A layer for the features of letters.
2. A layer for letters.
3. A layer for the representation of words.
This model permits top-down processing, as opposed to Selfridge's model, which implies bottom-up processing all the time. Top-down information about words can activate or inhibit letter activations, thus helping the recognition of letters. This indicates that words are not processed letter by letter (the Word Superiority Effect). Also, processes can take place in parallel, where in Selfridge's model they occur serially.

Alexia: Patients cannot read written words, even if other language aspects are normal. Written-word processing takes place in the left occipito-temporal lobe; damage to this area causes pure alexia.

Puce et al. (1996): Activation for faces, strings of letters, and textures was compared. Letters elicited greater activation in the left occipito-temporal lobe (the WORD AREA).

**BOTH SPOKEN AND WRITTEN WORDS UNDERGO SEPARATE PROCESSING BEFORE THEY ACCESS THE MENTAL LEXICON. FROM THERE THEY ARE PROCESSED THE SAME WAY.**
**The left MTG and STG are important for the translation of speech sounds into word meanings.**

The Role of Context in Word Recognition: Syntactic and semantic information must be integrated. Contextual representations are important in determining the grammatical form in which a word should be used.

3 Classes of Models to Explain Word Comprehension:
1. Modular Models: The flow is strictly bottom-up; language comprehension processes take place in separate, distinct areas (Selfridge's model).
2. Interactive Models: All types of information can contribute to word processing.
In this model, context can have an effect before sensory information is available, by influencing the computation of the final analysis in the mental lexicon (McClelland's model).
3. Hybrid Models: Lexical access is autonomous and not influenced by higher-level information, but lexical selection can be influenced by sensory and higher-level contextual information.

Semantic processing & the N400 wave: A negative-polarity voltage peak whose amplitude is reached approximately 400 ms after the stimulus. When anomalous words are presented in a sentence, there is a large N400 response.

Syntactic processing & the P600 wave (Syntactic Positive Shift): Reflects noticing syntactic violations; amplitude is reached approximately 600 ms after the stimulus, when the stimulus is incongruent with the expected syntactic structure.

Syntactic Parsing: The brain does not store sentences; there are too many for that to be feasible. Syntactic parsing does not rely on the retrieval of stored sentence representations.

Syntactic Processing in the LIFC (Left Inferior Frontal Cortex): Caplan et al. (2000) found greater LIFC activation for sentences with more complex syntactic structures.

Comprehension vs. Production of Language:
Comprehension: A stimulus is received and then goes up for processing.
Production: First the meaning, then the word to assign that meaning to, then how to write/say the words you want. It's the same pathway, only going in opposite directions.

Levelt's Model of Speech Production: The first step in producing speech is knowing what you want to say. Levelt says there are two key stages of preparation:
Macroplanning: The intention of the communication, represented by goals and subgoals.
Microplanning: Adopting a perspective that proposes how the information is expressed. The microplan determines word choice and the grammatical roles that words play.
Each stage in Levelt's model of language production occurs serially; each stage's output representation is used as input to the next stage.
This avoids feedback, loops, parallel processing, and cascades, and it fits well with findings from ERPs recorded intracranially.

Can animals learn language? Animal calls can carry meaning and show evidence of rudimentary syntax. In general, however, animal calls tend to be inflexible, associated with a specific emotional state, and linked to a specific stimulus. Many researchers suggest that language evolved from hand gestures, or from a combination of hand gestures and facial movements. The areas that control hand movement and vocalization are closely located in homologous structures in monkeys and humans.

3.7. Recitation Articles

DeLong Article (2005)
The big question: "Why are we so good at processing language?" The smaller question they're trying to test: "Do we predict upcoming words, or do we integrate them as we see them?"

Language Prediction vs. Integration: Do we use the context of the language to make a prediction, or do expectations depend on the words themselves and how they fit into sentences?

Prediction: "The boy goes outside to fly a...kite!" The brain predicts the word "kite," and when it sees its prediction is correct, it just keeps chillin'. (Does processing happen before the word "kite"?)
Integration: "The boy goes outside to fly a...kite!" The brain takes each word into account as it is said; it doesn't make a prediction, but when it hears the word "kite," it recognizes that it fits into the sentence and accepts that the sentence makes sense. (Or does processing happen after the word "kite"?)

N400: A negative response that peaks around 400 ms when there is a semantic error (in the meaning of the word being processed); the brain being surprised at something in language. It's inversely correlated with how much sense the word makes, so when the sentence makes sense, the N400 does not peak. However, the N400 could reflect either difficulty of integration or a violated expectation, because we measure it after the incongruent word. So how do we know?
We gotta see if there is an anticipatory effect.

Cloze Probability: Ask people to continue a phrase or sentence, and then see how many people gave the same continuation (the percentage of people).
Ex. "Stainless ___": What other words follow "stainless"? Not many at all; the cloze probability for "steel" is about 100%. This word is very predictable and easy to integrate into the context of "stainless." (They obtained the cloze probabilities from a different group than the one tested.)
Ex. "Short ___": sighted, term, people, hand, sleeves... In a large sample of people, some of these words would be repeated. Let's say each word has a cloze probability of 20%.

Objectives/Goals of the Experiment: Does language processing involve prediction? It was hard to distinguish prediction from integration because the N400 happens after the fact. So can we see N400 effects prior to the target word? (That would indicate prediction.)

Method: Figure out the cloze probability for both the article and the noun.
"The boy goes outside to fly a...kite!" (noun)
"The boy goes outside to fly...a kite!" or "...an airplane" (article)

Results: A graded effect. As cloze probability gets higher, the N400 signal goes down for nouns, and we see the same for the article.

Conclusion: The brain is predicting while reading, because there is a signal of prediction on the article ("a" or "an"), and semantically those articles are the same. There is a graded N400 effect based only on which noun is going to follow: greater N400 for unpredictable nouns. This doesn't mean we are consciously predicting words every time; it's just the brain producing signals.

McClure Article
How do cultural messages combine with content to shape our perceptions, to the point of modifying behavioral preferences for a reward like a sugary drink? Are perceptions shaped by brand knowledge? The article investigates neural responses to non-carbonated versions of Pepsi and Coke, behavioral choice, and brain responses to both drinks.
These drinks were used because sugary drinks are seen as a reward.
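The cloze-probability norming procedure from the DeLong section boils down to simple counting, and can be sketched as follows. The response lists here are made up to mirror the "stainless steel" and "short ___" examples; they are not the study's actual norming data.

```python
# Sketch of computing cloze probability from norming responses.
# The response lists are invented to mirror the examples above.

from collections import Counter

def cloze_probabilities(responses):
    """Fraction of participants who produced each continuation."""
    counts = Counter(responses)
    total = len(responses)
    return {word: count / total for word, count in counts.items()}

# "Stainless ___": essentially everyone answers "steel".
steel_norms = ["steel"] * 10
print(cloze_probabilities(steel_norms))  # {'steel': 1.0}

# "Short ___": many continuations, each with low cloze probability.
short_norms = ["sighted", "term", "people", "hand", "sleeves"] * 2
print(cloze_probabilities(short_norms))  # each word: 0.2
```

A high-cloze continuation ("steel" at 1.0) is highly predictable, which is exactly the dimension the study varied to look for graded N400 effects on the noun and on the preceding article.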