Language and the brain: Insights from deafness and Sign Language (17 Mar 2015)


Good afternoon, I’d like to welcome
everyone to today’s Lunchtime Lecture. The speaker today
is Dr Mairéad MacSweeney. Mairéad got her PhD in 1999, and then did a postdoc at UCL,
before going off to Oregon to work with Helen Neville
for a couple of years. She’s now a Senior Wellcome Trust Fellow, working in the
Institute of Cognitive Neuroscience and is also a co-director of DCAL, the Deafness, Cognition
and Language Research Centre here at UCL. I have known Mairéad for a long time and we’ve collaborated
on various projects over many years and she’ll be speaking today
about language in the brain, insights from deafness
and sign language. Thank you. I have a green light.
I think that’s a good sign. Okay, so I will be talking
about language and the brain and the main point
that I want to get over to you today is that when we think about language, and we think about language from the perspective of people who
are born severely or profoundly deaf, and those who use sign language, then it encourages us
to take a much broader perspective of language processing. Up until about 15 years ago, language researchers took a very, what we might call,
audio-centric view of language processing, focusing predominantly
on auditory speech. But now, it is much more accepted,
acknowledged, that we get a lot of useful
and important information about language and communication through the eyes, as well as the ears. For hearing people, language
is better thought of as multimodal. So even as hearing people, we are often
very reliant on visual speech. If you communicate
with friends in a noisy pub, if you can see their faces
it’s a lot easier than if you can’t. We also get a lot of information from
gestures when we are talking to people and we also access
spoken language visually, when it’s written down in text. Because text is a visual representation
of spoken language. When we think about people
who are born profoundly deaf, then we can consider
the special case of sign language. In the short time we’ve got, I’m just going to focus my research
on two of these areas. I will be… Sorry, I’ve got
two screens on the go here. I will be talking about sign language and how we can use sign languages
as a tool, a scientific tool, to gain unique insights
into language more generally. And then I will also be talking
about reading and thinking about
how deaf children learn to read. Because it may come… Thank you. Is that better? Yes. It may come as a surprise, but deaf children
find it difficult to read, because we read
spoken language written down. I’m going to focus
on these two areas of research. Okay. What do we mean
when we talk about sign language? It may come as a surprise to you, that sign languages
are different in different countries. There isn’t one universal sign language
that is used around the world. In this country
we have British Sign Language, BSL. In America
they have American Sign Language, and so on. And this fact is highlighted nicely
on this Sheffield forum website, where Emma said she decided
to try and teach herself sign language, but unfortunately she bought
an American book on Amazon and then she asks
the helpful community online, “Is there any point
in me using this book?” And Nuttygirl
isn’t that nutty after all and says,
“ASL is totally different from BSL.” She’s right,
but the Internet can be harsh and she says “Sell your book
or go and live in America.” So this fact, that sign languages
are different in different countries, almost upsets people. People say, “How ridiculous! Shouldn’t deaf people use
the same language around the world?” Well, surely hearing people
should do the same thing, and we don’t. And we don’t do that, because languages
aren’t created in the classroom. Languages develop where there is a pressure for a
community of people to communicate. And in the last 20 years or so, there have been a number of instances
in developing countries where deaf children
have been provided with education, with schools
for the very first time. Deaf children have been
brought together for the first time and linguists have been able to watch
as they have started to communicate and a language has started to develop, and then watch,
as it evolved, being passed down from older children to younger children. The next important point
that we have to raise is that sign languages are distinct
from the surrounding spoken language. Many people think that sign languages
are simply a collection of gestures that support the spoken language. But they are not. Just as spoken languages
can have different word orders, so too do British Sign Language
and spoken English have different word and sign orders. Here’s an example. In English, we might say something
like “The cat sat on the bed”, and in BSL, the sign order
would be ‘bed, cat, sat’, which would look like this. So the sign order,
‘bed, cat, sat’, is totally different
to the English word order. So the fact then
that we can analyse sentences and the structure of sentences
in this way, at a syntactic level, supports this last point that sign languages
are natural human languages, that can be analysed by linguists at all of the same linguistic levels
spoken languages can. And all of this information, along with a wealth of relentless
campaigning by the deaf community, led the UK government to recognise
British Sign Language in this country, on 18th March 2003, as a language in its own right. So, the aware amongst you will notice that tomorrow
is the 12th anniversary of recognition of BSL
by the government in this country. Okay, so all of this
is very interesting, but what does it actually tell us
about the brain? Once we’ve established that
signed languages around the world are true languages, are real languages,
just like spoken languages, then we can use sign language as a tool to help tell us more
about language processing in general, and they provide a unique tool
to address this question: Which brain networks are
involved in language processing? So… If we were only to think about
how the brain processes language from the perspective of
spoken language and hearing people, then it’s really difficult to
dissociate, to pull apart, the low-level auditory processing
regions in the brain, from what we’re interested in. The bits of the brain that are really doing
the hard work of linguistic processing. Similarly, if we were only to think
about sign language processing, it would be very difficult
to pull apart the regions that are
interested in visual processing from the regions that are truly doing the hard work
of language processing. But by comparing the two, we can gain a much richer insight. So if we can compare how signed and spoken languages
are processed in the brain, then we can identify those regions that are involved in processing
both sign and speech and then we make the argument
that whatever these regions are doing truly is something
to do with language processing. Then we can go further into asking what these regions are actually doing
and where we see differences. Here, where we see differences between
signed and spoken language processing, we can attribute these
to what we call modality, how the languages are received. Whether it’s received
through sound or through vision. Okay. So, in one of our very early studies,
a very long time ago… This was in 2002, so this was even before the UK
government had recognised BSL, we contrasted
hearing people watching English with deaf people watching BSL. What we’ve done here is to have hearing
speakers lying in an fMRI scanner, a brain scanner, and we showed them videos
of a much younger-looking me saying sentences. So they heard me say the sentences
and they saw me say the sentences. This is what we call
audiovisual speech. And we contrasted the activation
in their brain when they saw this with when they just saw
a static image of me on the screen. What did we see? We saw greater activation in the left
hemisphere than in the right hemisphere and greater activation in two regions
that have been well established as being involved
in spoken language processing. So Broca’s area, towards the front
of the brain, here in the frontal lobe, and then,
more towards the back, up here, Wernicke’s area at the junction of the
temporal lobe and the parietal lobe. These two regions,
we’ve known for a long time, are core regions involved in
spoken language processing. Okay. What did we see with the sign language? So, we also then showed
deaf native signers videos of the signer
showing the BSL sentences, translations of what was
going on in the English videos. ‘Deaf native signers’ means that they have
been born to deaf parents who sign and they’ve grown up using BSL
as their native language. So these are the ideal people
to test this question with. However, they are quite rare. Only 5 to 10% of the deaf population actually fall within this category
of being deaf native signers. What did we see? So we see, again, greater activation in the left
hemisphere than the right hemisphere. These regions that we are going to call
the core language regions, Broca’s area and Wernicke’s area, are involved in
sign language processing, just as they are
in spoken language processing. And this degree of overlap is seen
better on this overlap map here, where we are overlaying activation
that was common to both languages, to both British Sign Language
and English. So we see this core left hemisphere network,
Broca’s area and Wernicke’s area, and regions extending beyond them. Okay, so another way… The data I have just shown you
comes from contrasting two groups. That’s hearing people
listening to English, contrasted with deaf people
watching British Sign Language. Another way is to look at how these languages
are processed in the same brain and the neatest way is to look at
hearing native signers. These are the hearing siblings of those
deaf native signers that I mentioned. They are hearing
but born to deaf parents and they grow up learning,
in this case, American Sign Language, from their deaf parents. This is a study from Karen Emmorey
in the United States, who showed these hearing native signers
videos of people saying English sentences
and American Sign Language sentences and this is the overlap
between those two language inputs. So again, slightly more going on
in the left hemisphere than the right hemisphere, and these two core regions,
Broca’s area all the way up here and Wernicke’s area, involved in processing both languages. All right. We have seen then… I’ve given you a snapshot of our data and there are lots of studies
from us and from many others, clearly showing
that this core left hemisphere network is involved in language processing across a range of different tasks, regardless of whether
the language is being delivered through the eyes or the ears. Okay, what about
the flipside of this then? What about the differences? So, going back to that study
I showed you a moment ago, which was the hearing native signers watching both sign language
and English, this image is showing you
where the differences are when those two languages
are contrasted. So, in blue, we have greater activation in
English than in American Sign Language. The blue regions sit
around here on your brain and these regions
involve the auditory cortices. Not surprisingly, when hearing people
are listening to speech, they are showing greater activation
in the auditory cortices than when they are
watching sign language. And greater activation
to sign language than English was found in these huge regions
at the back of the brain on both sides. These regions
are involved in processing vision and in particular
in processing visual motion. So these two findings
aren’t that surprising. There’s a lot more motion
going on in the sign language and these are hearing people, so they are hearing the English,
and not hearing anything in the ASL. Their auditory cortex
is activated for the English. So those differences can be attributed to
low-level processing differences in how the languages are received. But the study also showed
differences higher up in the brain. Up here, greater activation
for sign language than for English, up here in the parietal lobe
on both sides. And the parietal lobe, we know, is particularly good at
and particularly interested in processing space. There is a growing number
of studies showing that the parietal lobe may play
a particularly important role in sign language processing. It’s involved in
spoken language processing, but it may play an even more important
role in sign language processing. So I’m going to show you… Well, explain one of our studies that highlighted this
a number of years ago. We had deaf signers in the scanner and we exposed them to BSL sentences that either used space
to represent real spatial relationships or that used space
but weren’t talking about real spatial relationships
in the world. What do I mean by this? An example of a sentence with real
spatial relationships would be: ‘The cat sat on the bed’,
which looks like this. You have to pay close attention
to where the hands are to know the cat is on the bed,
not under the bed. Then we contrasted this
with other sentences where space is being used, but we are not telling you
where those things are in real space. A sentence like,
‘The brother is older than the sister.’ I will voice this as I sign it. It looks like this. ‘Brother’… Sorry… ‘Brother’, ‘sister’… So we have the brother here
and the sister here, and I’m telling you
which one is older, but I’m not telling you
where they are in the real world. When we contrasted these two
that are using space, but using space differently, we found greater activation in these
deaf signers in the left parietal lobe, up here, as indicated on this slide, suggesting that these spatial sentences really are recruiting
the special spatial processing skills of the left parietal lobe. Okay, so… What have we learnt so far then? In this whistle-stop tour, I’ve shown you
that there are differences between how
sign and speech are processed. Some we can attribute
to quite low-level differences between sound and vision. Others we can attribute
to more interesting differences in the use of space. However, I think the more interesting
part of our findings in this field is the fact that we get
this core language network in the left hemisphere that is involved
in language processing, regardless of modality, regardless of whether the language is
coming in through the ears or the eyes. I think the challenge for this research
field in the future is to really drill down into what
aspects of linguistic structure these different regions
are interested in. How are languages represented
in these regions and what processes are actually
going on in these core regions? Okay. So…
I said I would cover two of our areas. I’ve covered
how we can use sign language as a tool in basic science research. Now I’m going to talk about reading. It may come as a surprise
if you haven’t thought about this, but many deaf children do find it
very difficult to learn to read and this is despite the fact that
they have a normal non-verbal IQ. Deafness is not a learning disability, yet the average deaf child
leaves school aged 16 with a reading age
of around 10 or 11 years. It has of course been well established that poor literacy
not only affects education level, but also has lifelong consequences
for vocational attainment and well-being. Okay, so why might this be? When we read, we read a spoken language. Even though reading might appear
to be a visual task, we are reading
a spoken language written down. That’s what text is: a visual
representation of spoken language. So for the hearing child
learning to read, their challenge, or one of
the challenges that they face is trying to crack this phonics code. They have to learn
the systematic mappings between their knowledge of speech and speech sounds, in particular, and then letters on the page. They have to make the link
between the fact that the word can be broken down
into the sounds: ‘c-a-t’ and that these map
onto particular letters. Now, deaf children of course, by definition, don’t have
full access to spoken language and to the structure of sounds, and so they struggle to learn to read. This is a good place to point out that broader language skills
are of course also important for learning to read, and sign language
can help deaf children. But with the decoding aspect
of learning to read, some knowledge of
the internal structure, the phonology of the word, is needed. So how might deaf people,
deaf children, establish this? So… We… We argue, on the basis
of our past research, that deaf children can establish
some knowledge about the fact that ‘cat’
is broken down into ‘c-a-t’ from lip-reading, from how it looks on the lips. Also, to some extent, from how it feels
when you mouth or say the word ‘cat’. So although they may not
be able to establish as rich representations
as hearing people, as rich in knowledge about
the structure as hearing people, deaf children can establish some
knowledge about the structure of words, the internal structure of words. I’m going to show you some data
to support that statement in a minute, but on the basis of our past research, we have developed this model
where we argue that deaf children
are deriving information about
the structure of speech from vision, from how it looks on people’s lips, and that this is
contributing to their awareness of the internal structure of the word,
what we call phonological awareness, and this is then
supporting their reading, just as it is in hearing people. And so… here are some data to show
that deaf adults, in this case, can perform
above chance levels, when we ask them to make
rhyme decisions about pictures. We had a group of deaf adults and we asked them to decide
whether the English labels for these types of pictures
– chair and bear – rhyme. There are a number of things
to take away from this slide. One is that the deaf group are,
as we would expect, poorer than the hearing group. But there is great variability
in this deaf group. This whole bar represents
the spread of scores in this group. There are some doing extremely well. There are also some performing
at chance, so not well at all. But as a group
they are performing significantly above chance. And we have done a number of studies
on this, as have others, and even when you’ve controlled
for a number of variables, it seems that a number of deaf adults really can make these decisions
about rhyme. We argue they base this information
on how it looks on people’s lips and also how it feels
when you mouth the word. This motor component of speech,
rather than the sound. Importantly for our argument
that we are trying to make, in this study, better rhymers
were also better readers. This has been shown in a number
of studies with deaf people, although there is great variability. This is a bit of a busy slide,
but the main thing to take away is that in a number of our studies we want to look at those
really good deaf adult readers. Some deaf people
do become excellent readers and go to university
and come to UCL and do PhDs. We want to know how they are doing it and so we have looked at the brain
networks, the areas of the brain and the timing
of processing in the brain when we ask these good readers to
perform these rhyme judgement tasks. The main thing to take away
is that in these good readers, and good rhymers in this case, similar areas of the brain
are used with similar timing… Gosh, sorry that says ‘for sign
and speech’, and it shouldn’t. I apologise. But similar areas and similar timing is seen in both deaf
and hearing people when they are making
these judgements about speech. That all got complicated.
What does this mean? It means that even though
these representations of words, the internal structure of words, have to initially be
based on different information for deaf and hearing people, at some level, when they are
being analysed and contrasted, then we see
very similar processing going on. We argue that this lends support
to this overall model that knowledge of the internal
structure of spoken words can help reading in deaf children
just as it does in hearing children. And we have more data over time
to help support that argument. Okay, so what we are doing now
is testing this model in a trial. We have lots of deaf children, five, six, seven-year-olds, around the country at the moment playing games that we have developed along with
our collaborators at Cauldron. Half the children are playing
lip-reading games, half are playing maths games. We are predicting that those children
that are being trained on lip-reading over a three-month period will have a better understanding
of the structure of words and that this, in time,
will help their reading. We’re in the middle of this trial and
in the summer we’ll know the outcome. I thought I’d show and test you
on some of your lip-reading skills. Hopefully this will work. It’s important your visual attention
is in the right place to start with. You need to focus
up where this question mark is. You’ll see a man say a word,
then you have to choose which of the pictures
match what he is saying. Any ideas? So he didn’t like that. We’re going to watch it again. Let’s see if he likes that. He likes that, that’s good. So as the children get better at this,
it gets more challenging for them; it adapts to their abilities. Over time we introduce letters. Here’s an example
of where we are introducing words. Again, look
where the question mark is to start. We get them to do the reverse. They have to see the word or picture,
then match to the speech. So we are trying in all directions
to train their speech reading and mapping to letters. So hopefully in this very quick tour
of what we are doing I have shown you that language
is more than just auditory speech and we need to take
a broader perspective if we want to think about how
language is processed in the brain. I have also shown you that sign languages
are not just really interesting, but they can also be
a very useful scientific tool to help us find out unique information
about how the brain processes language. Also, I have shown you that
some of our research in this field has the potential to translate and
perhaps have important implications for the classroom. I’m just going to finish by
tempting you to some after-lunch cake. Our department, the Deafness Cognition
and Language Research Centre, is celebrating
12 years of BSL recognition by having a BSL Bake Off
in the South Cloisters, close to here. There is no obvious link between
British Sign Language and baking, apart from, if you’ve got
good phonological awareness, they both begin with ‘B’. Please come along, find out more about sign language
and the courses we are doing. I’d just like to finish
by thanking all these people. Thank you. Thanks very much, Mairéad. We have time for a few questions. I think you had your hand up first. Thank you. Good afternoon. With the ‘Plebgate’ scandal
with Andrew Mitchell… Yes? Outside 10 Downing Street,
they have state-of-the-art CCTV, yet there was this whole debate
about whether he actually said, “You pleb”, etc. Couldn’t they have got
a lip-reader to look at it? It sounds dim, but there were
millions of pounds spent and people lost their jobs. I was at Jesus College, Cambridge
with him as well. I won’t ask what your… – In terms of lip-reading…
– That’s a good question. There is a field
called forensic lip-reading where absolutely,
there are expert lip-readers who sell their time
and collaborate on things and our collaborator,
named up here, Ruth Campbell, has become an expert in that field. Academically.
She is not a lip-reader herself. However, there are expert lip-readers. But the issue is that you can
never ever be 100% sure. A lot of what is going on
when we lip-read… ..sorry, when we speak,
is behind the lips. So there is always
going to be ambiguity. There are some people
who are really good lip-readers, but they will never be 100%. There would always be some ambiguity. These forensic lip-readers
operate in courtrooms, as do clinical experts, so they give their opinion, but nobody can say it’s 100% right
until you get the audio track. But if I said…
Three consonants and one vowel. That’s pretty easy. But there are… ‘B’. If you just had the ‘B’,
it is easily confused. If I do ‘B’ without the voice,
it’s easily confused with ‘M’. They look the same on the lips. So there is always going to be
that confusion. And context will help with that
and various things, but there will always be
some ambiguity. Thank you for an interesting lecture. I was wondering if you had found
any gender differentiation in developmental learning
and language acquisition. I was also wondering if with speech, if we learn a second language
at a critical age after which, say at 11,
it becomes harder to learn, would that be the same
for a BSL user learning ASL? Is there a critical point
at which it becomes much harder? And I was wondering if there were any
insights about language processing and, say, Asperger’s. Neurological conditions
that might highlight how someone with Asperger’s
learning sign language might interpret language, and if that’s different
from people who don’t have Asperger’s. Thank you. So three questions. A test of memory to see
if I can remember them all. Gender. No, that hasn’t been looked at in terms of the neural systems
supporting language processing. It’s very difficult and expensive
to do these studies. We haven’t got the sample sizes
to look at gender issues, so nobody has looked at that. Development and critical periods. Yes, if you are a native user
of British Sign Language and you want to learn ASL, then you would be
learning it just as… You said age 11 or something. That would be just like
you learning French at age 11, having a first language
of spoken language. That’s a native signer
learning another sign language. It would be the same as a hearing
person learning spoken languages. But there are
more interesting questions to ask about sensitive language periods
with deaf children, because, as I pointed out,
only 5 to 10% of children are actually exposed to sign language
very early in life, with their deaf parents. That means we’ve got
90 to 95% of deaf children born to hearing parents who
probably don’t know sign language, so there are lots of complexities about languages
they are exposed to and so on. We can ask very interesting questions
about age of language exposure. The third question is about Asperger’s. I’m doing well, I’m impressed. So, Asperger’s… There is some research
looking at autism in deaf children. Asperger’s specifically, I’m not sure
about any research in that field. They were included
in the larger ASD spectrum. If you’re interested, look at research by Tanya Denmark
to find out more about that. You were next, and then you. Well, as he asked more than one… Firstly, I want to ask about
if someone has an operation and they get their hearing
for the first time, after critical development. Also, if they are hearing at birth
then they lose their hearing, what effect does that have? Then I’d like to ask about people who
are deaf and dumb, like Helen Keller… – Deaf and blind.
– Deaf and blind, I apologise. And also I was still thinking
about his question up there. And second…
Can you not film me? Secondly, did Chomsky do anything
about the question of deafness? Okay, again, you really want to
test my memory here. Right. I got distracted by the [unclear]. Okay, remind me
of the very first question. That was about people who are deaf
and then they have an operation. I’ll just answer two questions
so that other people can have a go. Perhaps more importantly, what about countries where women
have to keep their faces covered? You said that the visual aspect
is very important, but that must be
a great detriment to them. That’s almost as important
as any other question. Okay. I think visual speech,
visual communication, is extremely important. How that works in those countries,
I’m not sure. How it’s viewed, I’m not sure. Cochlear implants
are an important question. You said, “What happens
when somebody has an operation and their hearing is restored?” That’s a very simplistic way to put it and often it’s the way
people think about it, but actually cochlear implants are best
thought of as really fancy hearing aids. For some people they do work very well
and they can offer really good access to the auditory component
of spoken language. But for others,
they don’t work very well at all. The challenge there is to determine
who it’s going to work for and how best to help the person get
the most from that cochlear implant. But there’s a lot of evidence showing that even for those
with cochlear implants, visual speech is very important. So in adults, people who are good lip-readers
before they had the cochlear implant often tend to have really good outcomes
with the cochlear implant later. Maybe I can talk
about your questions later. Hi, it’s just one question. Thank you. It seems to me that what you have found is that one area of the brain
is specific in the way that it processes
how it decides to communicate. However, it would then use
other parts of the brain, the visual part or the space part, to use the body to then communicate
that language bit. It seems to me there is
a core area that wants to communicate but it decides to do that
in different ways. Is there any research to look at
how the brain can communicate in a way that is
more accessible for everyone? It seems language has been developed
and dominated by hearing people and right now we’re trying to find ways for people and deaf children to
learn to use our way of communicating. However, they have the same
core processing area in the brain, so maybe trying to develop
a different way of communicating, rather than basing how hearing people
communicate as the main way of doing it and then how everyone else
can more adapt towards that. Is there any research trying to think, not scientifically,
but more socially of how language and communication
can develop away from this idea of only the way
that hearing people developed it. – If you see what I mean.
– I think I do. Two quick things to point out. One is to clarify your summary of
what I showed you in that first part. All of these studies
were people watching sign language, so there’s nothing about production, there’s nothing about
the person in the scanner, while we are scanning their brain,
deciding how then to communicate. They are watching the language
and processing it, doing whatever we are asking them. We see these core regions
involved in both and these others that may be involved
in the two different languages. Your second point, I think that… We are…
So… Sign languages are used by a large sign language community
in this country and other sign languages are used
by communities in other countries. American Sign Language,
Israeli Sign Language… So to address your point, we would want
to encourage more hearing people to learn sign language, and to be able to say good morning,
etc, to signers in their community. I think that would address
your social concerns to some extent. – Thanks.
– Okay. Hi, thanks for an interesting talk.
I also have only one question. I have heard that often
in deaf schools nowadays, instead of teaching BSL, it’s actually mainly through
lip-reading that it’s taught and I was wondering whether this has
to do with the evidence you presented, that actually has to do with reading
and how lip-reading improves reading. Okay. There is great variability amongst education practices
in deaf children. There are some deaf schools
that use a bilingual approach. They will use BSL in some situations and then written English
in other situations. There are some education settings
that are purely oral, so the children
don’t use sign language at all. So there is great variability. It would be wrong to say there was
a push towards speech reading. I think there is still a balance and, if anything, there is a push
towards interest in bilingualism. More children
are having cochlear implants and more children
are in mainstream settings, in schools, with some support,
on their own. A hearing school with some support. Those children more often
are more reliant on visual speech and on lip-reading
and on oral communication. How does that relate to my research? This gives me a really good opportunity to say that I do this research
focusing on lip-reading and its importance to reading as being
just one part of the reading process. And it’s that very early step
into reading, of decoding words
in single word reading. But children have to have
a broader language base upon which that sits to really develop
their literacy skills, and I think a rich spoken language or a rich sign language
that they can fully access will give them that. I’m afraid we’re out of time, so we can’t take any more questions
because the next group is coming in. Join me in thanking Maréad
for a great talk.
