Sign Languages and the Mind: Their History, Science and Power, Part 1


– So hello, welcome. My name is Raffaella Zanuttini. I’m a faculty member here in
the Linguistics Department. When I started planning this event with the help of some of our students, I had two goals in mind. I thought we wanted to
learn more about ASL and about sign languages in general: their structure, their
history, their importance. And we also wanted to celebrate the fact that now ASL is a
subject of study at Yale. So students can learn the language and be exposed to issues
relevant to the deaf community. But as we organized the event and got in touch with a
number of people at Yale, we also realized that this gave us the opportunity to do something else, to meet members of the deaf community at Yale and beyond Yale, and to give them an opportunity
to meet one another. This is wonderful because it shows that introducing ASL at Yale is not
only interesting and useful, but it also helps us create a more inclusive and diverse community, and this is something
that enriches all of us. So I’m happy to see you all here. I’d like to welcome all of you, our panelists and our interpreters. And we’re going to start with some opening
remarks from three people who represent three
different groups at Yale: administration, staff, and faculty. So the first one is Dean George Levesque from the Yale College Dean’s Office. (audience applause) – Thank you very much. I really just have a few words of welcome that I wanna extend on behalf of the Yale
College Dean’s Office. I know that we have many
people from, oh, sorry. Is this better? We have, I know, many people
from across the university who are here today, and I
understand we have some colleagues from other institutions, too
who have joined us on campus. Thank you for coming and
helping to be a part of this. A special thanks to our presenters who have prepared our
program for this afternoon. I also wanna take this moment to publicly thank a number
of critical partners across the campus who
have been instrumental, as Professor Zanuttini mentioned, in bringing ASL into the
Yale College curriculum. I wanna begin by thanking
Professor Zanuttini, who as Director of Undergraduate
Studies of Linguistics was a persistent and eloquent champion for ASL to come into the curriculum. I also thank Professor Frank, Chair of Linguistics. It was critically important
to us, in the Dean’s Office, that ASL have a departmental home within the Faculty of Arts and Sciences, and so it has been very
important for linguistics to provide the departmental
home and structure for ASL, so we thank you for that. I thank Jessica Tanner, who has taught ASL through our Directed Independent
Language Study program, our DILS program. You may hear about that
a little bit later. We have been teaching ASL informally through that program
for a number of years, and Jessica has been very
instrumental in that, in developing a loyal
following of students. And I also wanna thank my colleague who couldn’t be here today, I’m sorry, Nelleke Van Deusen-Scholl, who as Director of the
Center for Language Study, supports DILS and funds that program, so I thank her in her absence. And lastly, a few more
people if you don’t mind, my colleagues on the
Language Study Committee and in the Teaching
Resource Advisory Committee. When we received the
proposal from linguistics, it was immediately and
enthusiastically supported by the members of the committee. It was really heartening to see faculty from across a number of departments
immediately jump on board, seeing this as an opportunity to
bring ASL into the curriculum. And none of this would have been possible without the support of the Dean of the Faculty of Arts
and Sciences, Tamar Gendler, who received the recommendation from the Language Study Committee and the Teaching Resource
Advisory Committee, and she also warmly received
it and found funding for it, for the inaugural teaching position, so we thank her for that support. And last but not least, I wanna thank the past
and current students who have expressed this abiding interest in studying ASL for a variety of reasons. It goes without saying that we wouldn’t be able
to offer courses in ASL if there were no students interested in taking them, and this has been a wild success. To see the number of students
in our opening sections of ASL bursting at the seams
has been very gratifying, not only because it prepares them, I’m sure, as citizens in a more diverse world, but because of the vital and robust intellectual field of study that ASL is. And so we’re very happy on behalf of the Yale
College Dean’s Office, on behalf of Yale College,
to welcome you all here. Thank you for coming,
and enjoy the afternoon. Thank you. (audience applause) – The next speaker giving
some welcoming remarks is Cindy Greenspun. – Thank you for having me here. My name is Cindy Greenspun, I’ve worked at Yale for 20 years. I work for the Yale Library system. I’ve been deaf all of my life, I lost my hearing when
I was two years old. I was mainstreamed in the
public school system in Austin, I was the only deaf person in those schools that I attended. It was a struggle to keep
up with my classmates, and my grades were nothing to be proud of. I made it to college and my
first year was also a struggle until I realized that if
I learned sign language, I might get the help and
the support that I needed to keep up with my classmates. So I made it a goal to
learn sign language, and the next semester, I’m happy
to report to all of you that for the rest of my
semesters in college, my grades were As. (audience applauds)
I was very proud. American Sign Language is amazing. What’s even more amazing, and what I’m really grateful for, is that American Sign Language is now part of the curriculum at Yale. And I hope that it’s as useful, as helpful to others who might be in the same sort of situation as I, or to anybody, hearing or not. So thank you to everybody for all of your hard work, and to the Yale community, which has brought sign language to the university. Thank you. (audience applauds) – Bob Frank, who’s the chair
of the linguistics department. – So I would like to welcome
everybody who’s here, and thank our speakers
for coming and for sharing what I’m sure will be a
fascinating afternoon. I’d also like to echo the thanks that Dean Levesque made earlier; I actually have exactly the same individuals listed on the paper in front of me, but I will not repeat them for you. But thank you to everybody
who’s been involved in this extremely vital
and important initiative in increasing diversity at
Yale in a very important way by incorporating ASL and deaf culture into studies at Yale University. So, one of the things that Dean Levesque mentioned was that it was really crucial, in bringing ASL studies to Yale, for ASL to have a departmental home, and the question of
where that should be was, in a sense, for linguists,
an easy question. We were very, very enthusiastic; all of our colleagues were enthusiastic about being that home because of the importance of
the study of sign language to the study of language in general. From the perspective of understanding the nature of linguistic structure, the study of sign language is
an important domain of inquiry because one is tempted to try to relate the nature of language to
the fact that it is spoken, and so one tries to derive, in part, some of the linguistic structure from the constraints that are imposed by the need to speak or the need to hear. And sign language poses
an interesting alternative in that the different modality that is used for its production and perception
raises other kinds of issues and raises the question of, well, is sign language just fundamentally of a different sort? And what has been discovered in the work over the last
almost half-century now has been that essentially no, what we learn from sign language is that there’s a fundamental commonality in human experience, in the human mind, that tells us that sign
language is a language, and its structure and its
underlying cognitive mechanisms are the same as those
involved in spoken language. And that’s been profoundly important, I think, both within linguistics, but also in understanding
our shared humanity. The second area in which sign language has been scientifically
significant comes about as the result of some distinctive issues that arise in the experience
of the deaf community. Some of these are extremely exciting issues; others reflect unfortunate aspects of the deaf experience. Because many deaf individuals are not exposed to sign language until later in life, we can study what happens to individuals when they’re not exposed to a language until later in life, and how that impacts the way in which they learn language. So these kinds of studies
have been extremely important and have been very enlightening. In fact, our first speaker,
so I will now transition, our first speaker today will be telling us something about this
kind of unique experience that arises in sign language. In this case, in the domain
of Nicaraguan Sign Language. So our first speaker, Annemarie Kocab, got her Ph.D. from Harvard
in Psychology in 2017. She now continues at Harvard as a postdoc, working jointly with Jesse Snedeker, who’s a professor in the Psychology Department, and our own former postdoc, Kate Davidson, now of the Harvard Linguistics Department. Annemarie’s work is a
sort of remarkable example of someone who combines,
in an exciting way, a careful and refined understanding
of linguistic structure and of the dimensions of variation that inform what goes
into linguistic structure together with a profound understanding of the cognitive foundations and the cognitive factors that are involved in language learning
and in language change. Much of her work has studied the unique experience
of signers in Nicaragua, where a new sign language
has come into being in the recent past, and
so she has focused on this case study of language
emergence over the short term, looking at a variety of
questions from referential shift, to the formation of questions, to the existence of recursive structures. She’s also looked at a
variety of other factors, including the representation of telicity and the use of non-manual
markers and iconicity. So it’s a really great pleasure for us to have Dr. Kocab join us and give us our first
talk today, so welcome. (audience applause) – [Interpreter] Thank you for
that wonderful introduction, and thank you for having me here. It’s a very exciting event, and I thank the organizers for planning all of this. Language is universal and ubiquitous. All human societies in the world have a language. The question is, where
does language come from? There are two kinds of explanations that are not mutually exclusive. One is that the structure of language comes from biological evolution and reflects properties of
our cognitive architecture, how our brains have evolved. The second is that language reflects a product of cultural evolution, so the kind of human capacity for social learning and
cultural transmission. Uh-oh. (audience laughter) Here we have a timeline that helps us to understand
these explanations. The dotted line in the
center represents the time when Homo sapiens dispersed from Africa, at least 70,000 years ago. Because all humans have language, any mechanism that allows
us to acquire language must have been in place before
we migrated out of Africa. On the first explanation, where the structure of language comes from biological evolution, we have a timeline like this. On the y-axis we have
linguistic resources, and on the x-axis we have time. On this view, over thousands of years, evolution selects for adaptations that allow for human language creation, meaning any individual mind is capable of creating and acquiring language. On the other extreme, language comes from cultural processes, similar to how mathematics and science were constructed over thousands and thousands of years, with each generation building on the work of the previous generation. So getting data to help
us address the question of biological versus cultural
evolution is difficult, and most languages have been around for hundreds of thousands of years, and their patterns of emergence
are largely lost to time. But we do have several tools to help us get at this question, one of which is the study
of emerging languages. In this talk, I’ll be talking about one particular language in Nicaragua. If you go to Nicaragua
today, you’ll see a community that looks like many other
language communities. If you go to a school for the deaf, you’ll see the children
there are hanging out, playing and interacting
all in sign language. That language didn’t exist 40 years ago. Because it emerged so recently, the people who created the
language are still alive today and we can study them to better understand where language comes from. Let me tell you a little bit about the history of
Nicaraguan Sign Language. You may know that most deaf children are born to hearing parents
who do not know sign language, meaning that they have
no accessible language in their environment. That was the situation in
Nicaragua until the 1970s. In those situations, deaf
individuals are propelled by a social desire to
communicate with other people, and they’ll create gestural
systems called Home Sign Systems to use with their family members. These systems are internal to the family, they’re not used outside of the family. The situation in Nicaragua
changed drastically in the 1970s when the government established a school for Special Education that pulled together deaf
children in large numbers for the first time. They did not use sign language as a method of instruction in school, they used oral methods, but
the children brought with them their individual home sign systems and began to gesture to
communicate with each other on the playground, in the bus, and at that point, language
began to emerge and develop. We call the first group of
children to enter the school the first cohort. What allowed the language
to continue to grow is that additional groups of children continued to enter the school and learn their language from
the first group of children. For the purposes of comparing how the language has changed over time, we’ve divided these children into groups of roughly 10-year increments. The second group is the second cohort, and then the third cohort came in. Each cohort represents
a successive time point in the development of the language. Not all deaf individuals
enter the deaf community. There are many who never learn
a sign or spoken language, so their Home Sign Systems remain their primary mode of communication. These systems represent
the kinds of systems that individuals in
isolation come up with. I’m going to show you brief snapshots of what Nicaraguan Sign
Language looks like. I know you don’t know NSL, and I’ll be telling you what
to look for in each clip. The first video that I show you will be the language of the
first cohort of signers. You’ll see that their signs
are larger, slower, measured, and there’s not a lot
of rapid turn-taking. Now I’ll show you the
language of the second cohort. These signers as children learned their language
from the first cohort, and you’ll see that their
language looks different. It’s faster, there’s
more rapid turn-taking. And now I’ll show you a
video of the Cohort 3 signers who learned their language
from the second cohort. Their language looks very different. It’s even faster, there’s a lot of simultaneous
communication going on, very interactive. As you might suspect, biology
alone and culture alone can’t fully explain what we’ve seen so far with the first three
cohorts of NSL signers. It seems that language does not take hundreds of thousands of years to develop, but also it does not arise immediately in the first generation fully formed. We see fast emergence of words and the linguistic capacity
to express propositions, so the story is partly biological. But as this research program has developed over the last 20 to 30 years, we found that some linguistic structures take additional time to develop. For example, the marking
of argument structure, which indicates who did what to whom. These intergenerational
processes happen rather quickly in contrast to the typical
construal of cultural processes as long and protracted ones. Therefore, we have a third alternative. The pattern of language
emergence looks rather like this, where we have contributions from biology that equip our minds to create language with cultural interaction that allows for further
linguistic devices to develop. Let’s zoom in for a moment. On the x-axis we have
the home signers here because these systems represent the kind of systems that individuals come up with as a consequence of biology and living in human society. The first three cohorts are placed here. These boxes are some of the
things that we’ve seen develop as NSL has developed and changed. I’m not gonna talk about
all of these things, but you do see that some things come up in Home Sign Systems, some emerge in the first cohort of signers as soon as the community forms, and other things take
more time to develop. In today’s talk, I’m gonna talk
about two specific projects in which I’ve looked at the linguistic structure of NSL: recursion and quantifiers. These case studies show
how studying sign language can help us understand
more about language itself and how it emerges. These cases are interesting to linguists because these are places where the mapping between syntax and
semantics becomes complex. The first project I’m going to look at is the historical evolution of recursion, which is proposed to be a
property of all human languages. You can think about
recursion at two levels. You take the sentence, John
knows Mary knows Bill lied. On the meaning side, we have
a knowledge state, John’s, with another knowledge state
embedded in it, Mary’s, with a third event, Bill’s
lying, embedded in that. On the level of syntax, we have a simplified phrase structure with a clause embedded within a clause with yet another clause embedded in it. So recursion is argued to allow language to have discrete infinity, where you could compose an
infinite number of utterances with a finite set of words and rules. Recursion is often thought to
be universal and ubiquitous, and some have made the strong claim that it’s the only thing
that separates human language from animal communication systems. At least one language has
been argued to lack recursion, Pirahã in the Amazon. Instead of producing embedded structures, Pirahã is argued to use parataxis. So instead of saying something
like John’s sister’s house, which is embedded in English, they would instead produce a flat structure with no embedding: John has a sister. That sister has a house. This claim is debated. Part of the difficulty lies in interpreting a lack of evidence, and there are also differences of opinion on how the language is analyzed, so we’re at an impasse. The contrast between the two strategies is sketched below.
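To make the contrast concrete, here is a minimal toy sketch in Python. It is an editorial illustration added here, not anything from the talk or the study: the recursive function embeds each possessor inside the rest of the phrase, while the paratactic function strings together flat, independent clauses.

    # Toy illustration (hypothetical, not from the study): embedding vs. parataxis.
    def embedded(chain, noun):
        # Recursive structure: each possessor embeds the rest of the phrase.
        if not chain:
            return noun
        return chain[0] + "'s " + embedded(chain[1:], noun)

    def paratactic(chain, noun):
        # Flat structure with no embedding: a sequence of independent clauses.
        items = chain + [noun]
        clauses = [items[0] + " has a " + items[1] + "."]
        for owner, owned in zip(items[1:], items[2:]):
            clauses.append("That " + owner + " has a " + owned + ".")
        return " ".join(clauses)

    print(embedded(["John", "sister"], "house"))
    # -> John's sister's house
    print(paratactic(["John", "sister"], "house"))
    # -> John has a sister. That sister has a house.

Because the recursive rule can re-apply to its own output without limit, a finite vocabulary yields an unbounded set of phrases; that is the discrete infinity mentioned above, and it is exactly what the paratactic version never builds.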
Is recursion a fundamental property of human language? We designed studies to go in and analyze Nicaraguan Sign Language signers and home signers to find out. We looked at whether recursive
structures are available to pick out a specific individual from a previously mentioned set of characters. So that function is often
accomplished in language through the use of relative clauses, where you have a clause
embedded in another clause. We designed events and showed
movies to our participants. I’m using still pictures
for the sake of time. The first condition was designed
to elicit relative clauses. You have a set of girls, each
doing something different, and then one of the
girls does something new. The target description
will be something like the girl who was drawing
removed the picture. The second condition was designed to elicit conjoined clauses. By removing the other two characters, we expect to elicit descriptions with a flat structure without embedding. A girl drew and then removed the picture. The third condition had the same character performing the same action twice, drawing on a piece of paper and then drawing on an easel pad. Relative clauses are not the only way to describe these stimuli. A person could use parataxis instead where they mention the
girl and the action twice. The girl drew. The girl removed the picture. That’s the kind of structure
that we would expect if the language lacks recursion, like what we see in Pirahã. We predict that if the
language has recursion, then the ability to convey
embedded messages would be there. We expect them to produce utterances that have the meanings of relative clauses to establish a set of characters, and then pick out an
individual from that set and predicate something
new about that individual. On the syntax side,
prior observation of NSL allows us to predict what
relative clauses might look like. Many sign languages do not
use complementizers, like who, to mark relative clauses. But verbs in sign languages can undergo morphological changes in size, duration, and motion. So we expect that the
verb, when it’s embedded, will be shorter and
faster compared to verbs that occur in non-embedded contexts, providing specific visual
evidence of embedding. I’m gonna show a video of a
signer who does exactly that. You’ll see that she
first establishes a set, picks out one person, and
then says something new about that person, and
when she describes that, she repeats the verb. It’s smaller and faster
than the first time that she describes it. I’ll show you the videotape. It’s slowed down, and it has
glosses for your convenience. I’m gonna play it again. I want everyone to see that eating the apple the
second time was shorter, so let’s watch it again. That kind of description is
our candidate relative clause, where signers establish a set and then pick out an individual. The underlined section specifically is our relative clause with the repeated verb. We saw signers from all three cohorts produce those kinds of repeated strings on the relative clause trials, but not in the other two conditions. We interpreted this as a relative clause rather than parataxis for two reasons. The first is that in theory, that string could be construed as a conjunction. But that interpretation is
unlikely in this context because the verb, eating an apple, has already been completed, and
it’s already been mentioned, and therefore it’s
relevant only to the extent to which it identifies
the girl under discussion. And the second reason is because the verb itself looks different when it’s embedded in a relative clause as opposed to when it appears
in a conjoined clause. We looked at the form of the
verb in the two conditions. To quantify this, we took the ratio of the length of the first verb, draw, to that of the second verb, remove the picture, and what we found is that the verb draw is shorter when it’s in the relative clause, as you can see by the smaller purple bar, compared to when it appears in the conjoined clause, where you see the longer purple bar. So we suspect that the shortening is a morpho-syntactic way of marking embedding. A toy illustration of the measure follows.
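As a rough sketch of the measure just described, consider the following. The function, labels, and numbers are invented for illustration; this is not the study’s actual coding scheme.

    # Hypothetical sketch of the verb-duration ratio; all numbers are invented.
    def verb_ratio(first_verb_ms, second_verb_ms):
        # Ratio of the first verb's duration (e.g., DRAW) to the
        # second verb's duration (e.g., REMOVE-PICTURE).
        return first_verb_ms / second_verb_ms

    # One invented trial per condition:
    relative_trial = verb_ratio(420, 700)   # DRAW inside the relative clause
    conjoined_trial = verb_ratio(650, 700)  # DRAW in a conjoined clause

    # A smaller ratio in the relative-clause condition is the signature of
    # the embedded verb being produced shorter and faster.
    print(relative_trial < conjoined_trial)  # True in this toy example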
To make sure that the shortening isn’t happening anytime the same verb is repeated, we looked at the repeated action condition, where we had the same person doing the same action
twice, drawing and drawing, and we do not see any
shortening of the verb there, so it seems the shortening
specifically occurs when a verb is embedded. So that raises the question, can individuals who have not been part of a language community
produce relative clauses? We tested adult home signers in Nicaragua, and I would remind you
that these are individuals who have never learned a
sign or spoken language, and their home sign systems remain their primary
mode of communication. We tested four adult
Nicaraguan home signers, and first we checked to make
sure they understood the task, to see if they described
the target character and the critical action, and they all did. When we look at the descriptions
in the different conditions we see a different pattern compared to the Nicaraguan
Sign Language users. On the relative clause trials, the home signers did not introduce the set or repeat the identifying verb. In fact, their descriptions
looked very similar to what we see in the
conjoined clause descriptions, where they describe the target character and the new action only. The verbs don’t look any different in these conditions either. So there are three possible
interpretations of this finding. The first is that perhaps home signers are producing embedded verbs, but they’re not marking
them with the shortening. The second possibility is maybe that home signers can’t conceive
of recursive expressions. The third possibility is that home signers have recursion in mind, but they’re using parataxis
to communicate that. Based on prior work on
the kinds of messages that home signers are able to convey, we suspect that the third
hypothesis is the correct one. So home signers do have recursion in mind, but they produce paratactic utterances, similar to what we see in other languages that are argued to lack
recursion, like Pirahã. So we see that the basic
ability to produce recursion is present very early in the
emergence of a new language. All NSL signers across all
three cohorts differentiate between embedded and
non-embedded contexts. Home signers do not. They may be using parataxis. So what allows recursion to be present in Cohort 1 signers but not in home signers? A key difference between those two groups is that Cohort 1 signers are
part of a language community. They have the opportunity to
interact with their peers, and to use their peers’
utterances as input to their own. So it’s possible that
the home sign messages that had embedded meaning
became input to deaf children, who then reanalyzed the language to create new linguistic structures. Now I’m gonna move on to the second part, or the second project,
looking at quantifiers. Quantifiers in natural languages express relationships between properties, like some birds are flying. Quantifiers allow us to form generalizations, like most bananas are clones. Most, maybe all, languages have quantifiers, so maybe there’s something
special about quantifiers that can help us understand
more about human cognition. We tested this by looking at quantifiers in the first three cohorts of Nicaraguan Sign Language users. As I said, quantifiers
express the relationship between two properties, so we elicited quantifiers
by showing signers pictures with large sets of individuals, like bears. Then we showed them two paired pictures, in each of which a subset is engaging in an entirely new activity, in this case swimming; the difference between the pictures is the size of the subset. So we would expect something like some of the bears are swimming, and here we would expect all
of the bears are swimming. Here’s another example intended to elicit different quantifiers. By contrast, we would
get descriptions like none of the boys climb the tree. And here we could get many
of the birds flew away. We looked at three types of quantifiers. Universal quantifiers, which are used when
every member of that set satisfies the proposition: all. So really, for any x, property P is true of x. I’m gonna show you an example of a sign from Nicaraguan Sign Language for all. It’s very fast. All. That’s their sign for all. The second quantifier type that we elicited was existential quantifiers, where the propositional
function holds true of at least one member of the set: many, some, a few, a little. Technically, an existential quantifier can be used in a context that could be described by a universal quantifier. For example, if all the
boys climb the tree, it’s also true that some
of the boys climb the tree, but that tends to be less
pragmatically preferred and the reverse is not true. You can’t say all of the
boys climbed the tree if only some of them did. Here’s an example of one sign for the quantifier many in NSL. Many. The last quantifier type that we elicited is negative quantifiers, the negation of an existentially
quantified statement. So it’s not the case that any member x has property P. This is an example of the sign for none in NSL. None. The three elicited quantifier types are summarized below.
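In standard predicate-logic notation, which is an editorial gloss rather than notation used in the talk, the three quantifier types come out roughly as follows, with B(x) for x is one of the boys and C(x) for x climbed the tree:

    \text{all:}\quad  \forall x\,\bigl(B(x) \rightarrow C(x)\bigr) \\
    \text{some:}\quad \exists x\,\bigl(B(x) \wedge C(x)\bigr) \\
    \text{none:}\quad \neg\exists x\,\bigl(B(x) \wedge C(x)\bigr)

On a non-empty set, the universal entails the existential, which is why a statement with some is true but under-informative when all holds, while the reverse entailment fails; this is exactly the asymmetry described above and revisited in the results.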
For the results, I’m gonna start with existential quantifiers first. On the x-axis we have cohort, and on the y-axis we have the trials. The colors represent
different quantifier types. Yellow is for negative
quantifiers like none, green is for existential
quantifiers like many, and blue is for universal
quantifiers like all. And what we see here is that
when signers see a picture designed to elicit existential
quantifiers like this, they produce existential quantifiers. Many birds flew away. Now I’m gonna show you the results for the negative quantifiers. These were just designed
to elicit things like none of the boys climbed the tree. The colors mean the same
thing as they did before. The yellow bar means
that we see a lot of use of negative quantifiers
across all three cohorts. We also see some occurrences of existential and universal quantifiers. When we look more closely at
those types of descriptions, the signers were pairing them with different predicates: many of the boys were standing, instead of none of the boys climbed the tree, which is a perfectly acceptable way of describing that picture. But it’s important that they do have negative quantifiers in all three cohorts. Now I’ll show the pattern for
universal quantifiers, all. We’ll see a different pattern here. We see that the first cohort signers tend to use existential
quantifiers in contexts where it would be preferable to use a universal. So if you see a picture where all of the boys climbed the tree, they tend to say many boys. But the younger signers
from Cohorts 2 and 3 tend to use more universal quantifiers, all the boys climbed the tree. The use of the existential quantifier, as I mentioned before,
is not strictly wrong. It’s true that many of the boys climbed the tree, but it’s under-informative. A stronger statement, all the boys, would be better. For example, if I told you
that I ate some of the cake, but in fact I ate all of
it, you might be mad at me. (audience laughs)
Because you thought, where’s my cake? One possible interpretation
of this pattern is that the pragmatic processes that underlie quantification take time to develop in a new language. For future work, we’d like
to pursue this further, and we also plan to take a
look at this in home signers to understand whether quantifiers are something that people who are not in a language community can express, like… – Stay tuned.
– Stay tuned. (audience laughs) So we see evidence for quantifiers in language early in NSL,
which suggests that quantifiers could be a universal aspect of language that emerges early in a
newly developing language. I started my talk with the question where does language come from, and I hope that you’ve learned from this that the study of emerging sign languages really can help us address this question. This work demonstrated
several different things. First, humans are very well
equipped to create language, and the moment we have
the opportunity to do it, we do so and build a language. Secondly, full language does
not spring from a single child. There are many domains that require time or other structures to emerge, but it doesn’t take hundreds
of thousands of years. Otherwise, we wouldn’t see them here. Thirdly, the situation in
Nicaragua is rare and important, and we have an opportunity to see the emergence of a language in real time. That language emerged when deaf children were brought together in a school. What’s interesting is that that situation is not that unique. Many sign languages around the
world developed in that way. So in the next talk by Dr. Amber Martin, she’ll be discussing some of the effects of language deprivation. I wanna highlight here that children are
capable of amazing things if you just give them the opportunity. Thank you. (audience applause) Thank you all for coming. Thank you to my participants, my funders, and my collaborators. – So we have about 10
minutes for questions. And we have microphones,
so raise your hand and we will bring you the microphones. Excellent. – Thank you.
– Thank you. – [Male] Thank you, that was wonderful. I wanted to go back to something that you were pointing out towards the end in the quantifier study. As you know a lot of psycholinguistic work has tended to show that
in truth value tasks, children tend to be more logical, as Noveck and his colleagues put it, than adults, in the sense that children are more likely
to judge sentences true if they have the form
some of the boys left even though all of them did, and adults are much more likely to take the upper bounding implicature
into consideration, interpreting some as some but not all, and therefore rejecting sentences like some of the boys left if all of them did. So in a sense, as the Cohort
1 turns into Cohort 2, we have a case of ontogeny
recapitulating phylogeny in the sense that the
development of the language mirrors the development
within individuals. – [Interpreter] That’s very true, and that’s something
that we’ve thought about. One thing that I would point
out that’s a bit different is that the tasks with children are comprehension tasks, but typically, when children produce sentences, they’re not using under-informative messages. They tend to express all, and
that’s different from this. These are production tasks. So there does seem to be a parallel, but there’s something different
going on here potentially. – [Male] Thank you, so the parallel differs a bit, then, across production versus comprehension. Thank you. – [Woman] Hi, that was a great talk. So we know in the early
work by Senghas and Coppola that NSL signers used to
use paratactic structures for two animate arguments, with things like woman push, man fall, and people have interpreted that as meaning transitivity hadn’t
emerged yet in the language, and that they prefer to
use paratactic structures. I don’t know if you have any ideas about why there seems to be a discrepancy between being able to embed with relative clauses across all cohorts, and this earlier finding about not being able to have two animate arguments in a single predicate. – [Interpreter] I wanna
modify a little bit. So they didn’t say that they can’t express
transitive sentences, it’s that when there are
two animate arguments, it seems to be that each
argument gets its own verb. So man push, woman be pushed. And that’s a perfectly fine description, so the argument expression… Wait, wait, wait. Does that match your
understanding of their work? I wouldn’t say that they don’t have the capacity to express
transitive sentences. – [Woman] I don’t really
have a stance on them, I just know that that’s
one of the interpretations of their findings from,
I think, other people. – [Interpreter] There’s the question of why some things come up early and other things come up later. Right now, we don’t know enough, and I think this is work that will continue, where we study different domains to understand how the language structure emerges, with comparison studies between home signers and NSL signers. But they can express transitive sentences, so I wanna make sure that that important point is made clearly. – [Woman] Thank you. I just have one more. So in your slides you
mentioned spatial agreement and using space for grammar
by the third cohort, but you didn’t include person agreement. Is there any specific reason why? – [Interpreter] Because we
haven’t researched that yet. – [Woman] Okay, (laughter) thank you. – [Interpreter] But that’s a great area for future research, absolutely. And actually I have a study that is going in that
direction at some point, but yes, person agreement,
that’s interesting. – There’s a question
over here to the right. Oh, we need an interpreter
to voice her question. – [Woman] No. – No?
– But if it’s easier, I can sign. Okay, sorry. I was just looking at your slide about the quantifiers, and on universal, for the second cohort, you see
them entirely using universal when universal is what’s targeted. However, for the third cohort,
some go back to existential, and I was wondering if you had a theory as to why that might be. – [Interpreter] I suspect that’s probably just measurement error. I think we may have fewer signers in that cohort, maybe, I’d have to look more closely at it, but it’s possible that the signer was describing another picture. But at this point, I’m not inclined to think that that’s a meaningful difference. But if it is, then of course that’d be something to look into, but I suspect that it’s not. Good question though, thank
you for asking about that. – [Woman] Thanks. – [Woman] Thank you very much,
this is very interesting. I had a question about the
relative clause findings where you showed that there
was a significant difference in the length of the verb. But you had also mentioned,
and it was evident in the video that in the embedded context, the movement of the verb
was repeated fewer times, which makes sense. You need a longer time to
have more movements in there. But I wondered if you had coded for anything other than length, and whether there were
other findings as well that were not significant. – [Interpreter] No, we
coded only for length. And… I’m not sure that I meant to say that the movement of the sign itself was less. The sign happens more quickly, so rather than someone saying eat apple with a larger movement, it was much smaller. It could be that there’s the same number of movements, just less distance involved, so it’s faster to complete. One thing that we plan to do,
but have not yet looked at, is to take a look at
the non-manual markers. Non-manual markers, for
those of you that don’t know, are grammatical features of sign languages that tend to carry a lot of information on the face. For example, in American Sign Language we mark conditionals, like
if, with raised eyebrows. If it rains, class will be canceled. And if there were no eyebrow raise, it would mean that it’s a
declarative sentence instead. It rained and class is canceled. So we plan to take a look
at those kind of markers with this data. Because non-manual
markers tend to go along with relative clauses
in many sign languages. – [Woman] Thank you, that would be very interesting to hear about. – [Female] Hi, thank you
for coming and speaking. Please bear with me while I
try to formulate this question. – [Interpreter] Sure. – [Female] I noticed in
the slides, while trying to figure out what I was watching in the different slides about the, was it bears swimming? – [Interpreter] Yeah, the quantifiers, yes. – [Female] Thank you, right. What I noticed was that one of the images showed the birds flying away from the tree. And I look at that and it feels causal, like something happened
to cause the birds to fly away from the tree. Now of course the same thing can be said, something happened to cause the boys to climb the tree and get the apples or to cause the bears, but it seems more naturally in an environment for a child, maybe, to
see birds fly from a tree. This is where your bearing with me is a really great thing, so thank you. Do you think the nature of those images, birds flying from the tree as if a shotgun went off or a car door slammed, affected the study at all? For me that feels more causal than the bears went swimming because XYZ, which is less causal. – [Interpreter] That’s an
interesting observation. I think there’s a few
things to say about that. One is that we didn’t observe any kind of expressive language in either the English speakers on whom we normed the study or the NSL signers. But suppose we had… Well, it’s not clear that having a causal event would affect quantifier structure, because even if something caused the birds to fly away, there wouldn’t be a difference in the size of the group of birds that flew away, and what we were after was that quantifier difference. Whether they wanted to
say some birds flew away or all of the birds flew away. – We have one minute left so
if you have a quick question, I think Nanyan has a question here. – [Nanyan] Oh, it may
not be a quick question. So I’m not very familiar with
the inflection system of ASL, but I was wondering, on the topic of relative clauses: if we had a more elaborate description, say an action depicted with classifiers, what would happen in that situation? ‘Cause there’s more motion involved, so in terms of the reduced version and the relative clause, what are the things that will be reduced, and how is that different from, say, establishing an entity using space and then referring to
that entity in space? Would that be considered a paratactic use or a relative clause? – [Interpreter] Those
are all great questions and I’ll try my best
to answer them quickly. In terms of whether we see those kinds of shortening with other kinds of structures or classifiers or whatever, we plan to test that. We specifically did this study only using action verbs that tended not to involve classifiers. ‘Cause classifiers, as
you know, are complicated. And they’ve got a lot of
things in them, not just verbs. They’ve got information about
the noun in them as well and some people have argued
that these are gestural or they’re another linguistic structure. But your question is a good one and we do plan to study this to
figure out how far we can push this. Is it only with verbs? Can it occur with double embedding? With subject and object linked? So all of those things
are future directions and you’re on track with that. And your other question
was about pointing in space and whether we would count that as parataxis. So the analysis of pointing is very complicated and there’s a lot of… there’s not a lot of consensus in the sign language literature about what those points mean in their analyses. But it’s not about parataxis. It’s definitely anaphoric. Often it refers back to something. But demonstratives, definites, pronouns, we don’t know yet for sure what they are. – [Nanyan] Thank you so much. – Wonderful, thank you very much. Our second speaker will be
introduced by Jessica Tanner. – [Interpreter] Yay. Hello everyone, Hi. My name is Jessica Tanner and this is my name sign. It’s so good to see so
many people here today. It’s terrific. So, I just wanted to let you know that I am the first ASL lecturer here at Yale University. Yay, right? Bravo! So maybe, I’m not sure, but I could be the first deaf lecturer here on faculty? I’m not sure, I might have to
do some more research on that and find out if I’m the first. But yeah, it’s all good. So thank you so much to Yale University for successfully having
ASL for credit here. It’s such a tremendous thing. So we want to keep that going,
so congratulations to Yale. It is my great honor here to introduce the second panelist today. Her name is Amber
Martin, Dr. Amber Martin. She received her Ph.D. in Psychology from the University of Minnesota in 2009. Now she’s a professor of Developmental Psychology at Hunter College. So that’s really impressive, very nice. Her research is focused
on language acquisition and on the influence of language deprivation. Her specific concentration is on language acquisition in parallel with cognitive development. And she uses two different languages for her research: American Sign Language and Nicaraguan Sign Language. She uses these two languages to study the interplay between these two ideas, and how this affects the
language development of children, of deaf children in particular. So I think this is a
really interesting topic. Are you all really
curious to find out what she’s gonna speak about today? Yeah? So welcome to Dr. Amber Martin. Yay, thank you for coming. (audience applause) – [Interpreter] Good
afternoon, thank you so much. I’m honored to be here today and I very much appreciate the invitation and I welcome the opportunity to invite you all to my campus. As you just heard, Dr. Kocab talked about the
emergence of a new language and how it changed over time. The life of that language is a living, breathing, changing thing. Now I’m going to
transition to talking about language emergence and
growth in individuals. How individual children acquire language. As was previously mentioned,
my research looks at the effects of language deprivation on deaf children. What are the impacts of
early language acquisition and how does it affect
cognitive development? As mentioned in the introduction, my research centers on this question. How do language and
cognition shape each other? This is a very broad question, clearly. So the research could
go in many directions. One question I’m interested in is about which language you
have and when you learned it. How do those things affect
and shape you as a person? How do they shape the
user of the language? This is an interesting question to me. Here’s another question, and
it really is full circle. What about the user? Do you, as an individual
language learner and user have an influence on the language itself, the language’s developing structure? So it’s a cyclical question. And there’s an interplay here
with a bi-directional impact. I’m going to spend my time now talking about the first question. The language and how it
interacts with the children. But again, this question
does go bi-directionally. Many of you may have some knowledge about these statistics having to do with language experiences of deaf children. You may know that a very small
percentage of deaf children are born into an
environment where they have full access to a language because they have deaf parents
who use sign language. We call this group Deaf of Deaf or DOD. The rest of deaf children, 95 percent, experience a wide range of linguistic experiences here
in the United States. There are some deaf children
born to hearing parents, who typically are not exposed to sign language at birth. Hopefully that will change over time, but currently, when most hearing parents have a deaf child, that child is usually the first deaf person they’ve ever met. Also, among that 95
percent of deaf children, we see a very wide range
in language experiences. There are a variety of
intervention programs, educational approaches, some involving sign language, some not. Some focusing on speech only and sometimes they’re paired
in different ways. Also there are various sign systems that children are introduced to. So it is a very wide-ranging dynamic. What that means for deaf children, in terms of early language acquisition and their early language experience, is that there are a variety of experiences and the quality of those
experiences differ. There’s great variation, and it depends on whether they have good-quality access to language. That experience varies greatly. It often depends on the
quality of the input that a child is receiving. That applies not only to spoken language, but sign language as well. The quality of input can vary. And that quality of language input depends on the skill level of the adults in their environment. It also depends on factors
that are attitudinal, about early language exposure for children. That also varies greatly. A third factor that is
critical is time of exposure. The timing of a child’s
first language experience varies greatly among deaf children, as well. There are infants who are
exposed to language from birth whether it be a sign language or spoken language with
cochlear implant, at times. The first exposure to language,
that initial experience, sometimes happens far later. From an early age to an older age, and everywhere in between, there is no single profile of the deaf student or child. It varies greatly. And that’s been one reason it’s so difficult to study and characterize deaf children and their language experience. This chart is about spoken
language trajectories for deaf children with cochlear implants. The gray area is children who are hearing and their trajectory. The rest of the lines
reflect deaf children. What this demonstrates is
that it’s all over the map. Right, you see a wide variety. So there’s a great deal
of variability here. That early language experience
is difficult to capture well. As a result, many deaf children experience what we call language deprivation. We don’t actually have a standard definition of what language deprivation means, so here’s a definition. It’s true the experience is extremely varied, and that makes it very difficult to characterize. But those who study deaf children do agree that deaf students often experience some degree of language deprivation. It’s quite common. Language deprivation really means that a child was not exposed to age-appropriate language, and that there was a significant lack of exposure to language during the critical language-learning period. So what I just described shows the picture of deaf children here in the United States and the variability in that experience. Now I’d like to talk about
the situation in Nicaragua, and the early language experience
for deaf children there. There are some similarities
between Nicaragua and the United States, in that
respect and some differences. Among the similarities are that most deaf children born in Nicaragua have parents who don’t know sign language and who are not deaf. Also, in Nicaragua there are no standards for early intervention. They don’t have programs
to support parents after they have a diagnosis. For the deaf children in Nicaragua, we know clearly that their first language experience does not come until they enter school. The age of onset of language acquisition happens at the same time as their entry into school. So the year, or even the day, that they start being exposed to language can be clearly identified, because it’s marked by the time at which they entered school. In addition, children who
come in to the school there are exposed to the same peer group and others in their environment, so they have a consistent
language exposure experience. So some of those early
language experiences are similar to those of students
in the United States, but there are some areas that are unique. This brings me to my research question: How are language and
cognition interdependent and what are the
consequences linguistically and cognitively of early
language deprivation? So just because of the
modality of signed languages, the spatial structure is very rich. We use space in sign
language to talk about space but for many other reasons, as well, and to express many types
of grammatical structures. So automatically we see that sign languages are very rich in what we can do in terms of spatial information. In studying these languages, we can talk about things that are physically in our environment and point to express that information in shared space. We also use space for co-referencing. So if I want to talk about two people, a teacher and a TA, I can establish those two referents at two loci in space to indicate the two different people. But it’s not only used for
talking about real things. We can also use space to talk
about cognition or ideas. We can talk about comparing two ideas and how those two ideas are interrelated. We could talk about justice,
for example, here on my right and freedom on my left. And talk about how those two
things are related using space. Also, in the syntactic structure
of the language itself, in ASL there are rich uses of space and here are some examples of that. I can say here’s the teacher and the teacher gave a paper to the TA to distribute among the students. The directionality tells
you who’s doing what to whom and also modifications can be
done to the verb using space. That modality question is very rich because of the way signed languages make use of signing space. In Nicaraguan Sign Language,
there are some similarities in the way they use
space in their language, and research has shown how the spatial modulation of verbs has emerged in Nicaraguan Sign Language. I have a few things I want to show you that aren’t specific to Nicaragua but apply to other sign languages as well. The way the use of space is
organized tends to be horizontal, where we place things in space across a horizontal plane. You don’t have a person
up above and down below. In theory we could use a vertical plane in which to organize that, but typically that’s
not the way it’s done. It’s the horizontal plane. But we also do that
for different purposes. We can use a vertical
organization to talk about things like the relative status of people. So the horizontal plane
is very commonly used. So the modality-specific considerations give us an opportunity to explore the relationship between
the use of space and non-linguistic cognitive spatial skills. And this isn’t new research; it’s been around for a while, but it’s a great place to start because, again, of the richness of the way signed languages use space. Different studies have in fact shown that signers use mental rotation. Emmorey looked at this in prior research, and my work has as well. When we see people who use sign language, who are acquiring sign language, we look at their ability to mentally rotate transformations in the linguistic domain. So now I’d like to look at this question. Let’s go back to the importance
of age of acquisition. Does that impact or does that relationship between language and cognition depend on the age at which
a child acquires language? And when do those benefits appear? Do they appear early or does
it not happen until later? At what age do they emerge? Also, when we talk about benefits and the use of space and
spatial cognitive skills, are there specific devices that
are used in sign languages? How general are they and how specific are those
benefits that are carried from the language over to cognition? We wanted to develop
a mental rotation task to look at these questions. We developed tasks using computers. I’ll explain what happens in the tests and show you an example of
someone actually taking the test. These shapes represent what
a participant would see on a computer screen. So they’re presented with an image and then they have the real object that they’re physically holding. And they look at two pictures and they have to determine
which one matches the actual physical
object in front of them. And it’s a touchscreen, so they can respond by touching the screen. We measure response time, and we can also measure accuracy. How quickly can they make those decisions? A toy sketch of these measures follows.
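As a minimal sketch of those two dependent measures, here is a toy summary in Python. The field names and values are invented for illustration and are not from the actual task.

    # Hypothetical trial records; field names and values are invented.
    trials = [
        {"plane": "horizontal", "figure": "human", "rt_ms": 912,  "correct": True},
        {"plane": "vertical",   "figure": "block", "rt_ms": 1480, "correct": False},
        {"plane": "horizontal", "figure": "block", "rt_ms": 1105, "correct": True},
    ]

    def summarize(trials, plane):
        # Mean accuracy and mean response time for one rotation plane.
        subset = [t for t in trials if t["plane"] == plane]
        accuracy = sum(t["correct"] for t in subset) / len(subset)
        mean_rt = sum(t["rt_ms"] for t in subset) / len(subset)
        return accuracy, mean_rt

    print(summarize(trials, "horizontal"))  # -> (1.0, 1008.5)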
Importantly, we are comparing several different aspects of the mental rotation task. One is the plane used for the mental rotation. Is it a horizontal plane or
is it on a different plane? For example if you’re thinking
about a physical table, like maybe a Lazy Susan, when
you think about that rotation on a horizontal plane,
can you mentally represent both sides of that rotation, and can you modulate that visually in your mind? Or similarly, a picture moving on a wall along two different planes of rotation. And also we use two
different types of objects, a simple shape and a human figure. So we wanted to look
at two different areas, to see where specific effects are seen with users of sign language when using an object versus a human figure. This is an example of the task. For part of our analysis, we looked at the people who had early
exposure to language, and we wanted to look at ASL signers in this respect: native signers of ASL. We also compared them to
children who learned later, at the age of three or six. And we compared them also
to hearing non-signers. What we found is reflected in this graph. Those who learn language earlier are more accurate. So the higher bars
represent greater accuracy. The early learners
overall performed better than those who learned at a later age. We also found effects
of the plane rotation. Overall, signers did
better with rotation tasks that were horizontal than vertical. Also, hearing people did better with horizontal than
vertical rotation tasks. Overall, that was generally true. Oh I’m sorry, let me go back. You’ll also notice here
You’ll also notice here that for the early language learners there is no difference between vertical and horizontal plane rotation; those results are the same. The late learners, however, do better with horizontal and not as well with vertical, which mirrors the results of their hearing peers. Also, all of the groups did better with the human figures than with the block task, which was interesting; something about a human figure, as opposed to a simple shape, helped, and that result was consistent across all groups. What does this demonstrate? It demonstrates the importance of early language acquisition for mental rotation skills: the relationship between language and cognition is shaped by the age of language acquisition. Those who learn earlier perform better than those who learn later, but the deaf children who learned sign language later still perform better on these tasks than the hearing non-signers.
Now, to bring this back to the equation we originally talked about: what changes over time as a language emerges, and are there differences across cohorts in their use of space? Here we looked across cohorts of signers, and we also tested hearing people; the hearing people were included for the mental rotation task but, obviously, not for sign language usage. These are the results, here for the mental rotation task: you see the change over time, with improvement across cohorts and each successive cohort performing better than the one before. We also see that all cohorts perform better than the hearing group. Here, though, we see a different pattern, depending on the type of rotation task being performed.
With the human figure task, reflected in the solid bars, performance improvement is observed immediately: Cohort 1 signers already do much better than the hearing group in mental rotation tasks involving human figures. That happens very, very quickly. For the block task, a simple object rather than a human figure, the improvement takes longer to appear across the cohorts. So the pattern of mental rotation skill improvement differs depending on the type of stimulus involved, block or human figure. The age at which a child enters school can also affect their mental rotation capacity.
So now let’s look at what changes in the language itself. Some research has been done on the various types of space used in sign languages; here we looked at one particular use of space that changes over time: the axes used for verb movement. If a signer wants to describe who is doing what to whom, for example the giving of a book, one option is an axis that originates at the signer: I am the giver, and the GIVE motion moves outward from my body across the signing space. That is what we refer to as the z-axis. Alternatively, the verb can move from side to side in the space in front of me, where I have established the referents, so the GIVE moves horizontally between them; that is the x-axis. Either the z-axis or the x-axis can be used.
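As a toy illustration of that distinction (the coordinates and the GIVE examples below are invented for this sketch, not taken from the study’s actual coding scheme): if you logged the start and end points of a verb’s movement in signing space, you could classify the verb by the axis along which it moves most.

```python
import numpy as np

def dominant_axis(start, end):
    """Classify a verb's path by the axis along which it moves most:
    x runs left-right across the signer, z points outward from the body."""
    delta = np.abs(np.asarray(end, dtype=float) - np.asarray(start, dtype=float))
    return ["x (lateral)", "y (vertical)", "z (outward)"][int(np.argmax(delta))]

# GIVE signed from the body outward toward the addressee: a z-axis verb.
print(dominant_axis([0.0, 0.0, 0.1], [0.0, 0.0, 0.6]))   # z (outward)

# GIVE signed between two referents set up to the left and right: x-axis.
print(dominant_axis([-0.3, 0.0, 0.4], [0.3, 0.0, 0.4]))  # x (lateral)
```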
Research on other types of sign languages has found some that use the z-axis very early in the emergence of the language, and others where it seems to emerge later. So we wanted to look at this emergence in NSL. We asked people to sign different situations; the stimuli involved punching actions and various other types of verbs. Let me show you an example of what the signing looks like.
Here you notice the clear z-axis in use, and here you see the horizontal axis, with body shifting, using the x-axis. The use of the x-axis emerged more slowly across the cohorts: Cohort 1 signers do use it, but usage increases across Cohorts 1, 2, and 3, and by Cohort 3 the signers are using it much more frequently.

Now lastly, I’d like to show what this data suggests about how x-axis use correlates with mental rotation task performance. I don’t have a statistical analysis, but pay attention to who performs well on mental rotation tasks versus who doesn’t, and who tends to use the x-axis and who does not. This graph shows that people who do well on mental rotation also tend to use the x-axis, as you can see in these bars, while those who don’t perform as well on mental rotation typically don’t use the x-axis either. So the two performances hang together. That is not concrete evidence for a causal relationship, but it is consistent with that possibility.
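For readers who wonder what the missing statistic would look like, the natural check is a simple correlation between each signer’s rotation accuracy and their rate of x-axis use. The numbers below are invented for illustration, not the study’s data.

```python
import numpy as np

# Invented example data: one value per signer.
rotation_accuracy = np.array([0.55, 0.60, 0.72, 0.78, 0.83, 0.90])
x_axis_use_rate   = np.array([0.05, 0.10, 0.30, 0.45, 0.50, 0.70])

# Pearson correlation between the two measures.
r = np.corrcoef(rotation_accuracy, x_axis_use_rate)[0, 1]
print(f"r = {r:.2f}")  # high r is consistent with, but does not prove, causation
```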
So what do we think is happening in this cycle? How does spatial language, and early access to that language, carry over to spatial cognitive skills, and how do those skills interact, in turn, with the development of the language? The trend we see is this: the use of the z-axis emerges very early, and mental rotation skill also emerges early, specifically for the human figure. Later we see the use of the x-axis, and with it mental rotation ability for other objects, like the block. So these influences are bi-directional: specific features enter the language, those features feed cognitive changes in the language’s users, and those cognitive changes, in turn, may feed back to affect the grammar, especially the spatial aspects of the language. What we see here is a cycle of interdependence: language and cognition are co-organized, both across the language’s development and within individual children.

I’d like to thank all of the participants and those who supported my research work, and thank you so much for having me. (audience applause)

– Thank you for a wonderful talk. We have a couple of minutes for questions, and then we’re gonna break for coffee and come back.
So just a question or two. Do I see questions? Yes.

– [Female] Hi there. I just had a question about students who learn ASL: did you do any research on the difference between students who were mainstreamed into a public school and those who went to a School for the Deaf, and their differences in the recognition of space and movement?

– [Interpreter] Yes. I don’t know of specific research that has looked at the context in which children acquire language, for example mainstream schools versus Schools for the Deaf. Certainly the quality of the language input, and the richness of that input, will contribute to spatial and cognitive development, but I don’t know of specific research comparing those educational contexts, so I don’t know.

– Okay. One more question.

– [Female] I’m a grad student at Gallaudet with a Master’s in Sign Language Education, and I learned a lot here today.
One thing that struck me as new was the x- and z-axis analysis, and I’m wondering how you came up with that.

– [Interpreter] The idea of looking at the x- and z-axes actually started with research on other sign languages, like ABSL and other emerging sign languages around the world, and we wondered whether the same would be true here. Carol Padden, Irit Meir, and others were looking at ABSL, so we borrowed their ideas and applied them to Nicaraguan Sign Language to look for the same patterns of development. In fact, we did see more use of the x-axis over time in NSL, and spatial organization changed, too, across the cohorts.

– Thank you.
So we will have more time for questions at the end, when there is a panel discussion, so is it okay to hold on? Okay, we’re gonna take a 15-minute break, no longer than that, just to get up. We have some coffee and tea and pastries, and a restroom right there. So get something to drink quickly and please come back. Okay, excellent. Thank you. (light xylophone music)
