International Centre for Language and Communicative Development: Training Study - Children's Acquisition of Complex Questions, 2017-2018


A central question in language acquisition is how children master sentence types that they have seldom, if ever, heard. Here we report the findings of a preregistered, randomized, single-blind intervention study designed to test the prediction that, for one such sentence type, complex questions (e.g., Is the crocodile who’s hot eating?), children could combine schemas learned, on the basis of the input, for complex noun phrases (the [THING] who’s [PROPERTY]) and simple questions (Is [THING] [ACTION]ing?) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). Children aged 4;2 to 6;8 years (M = 5;6, SD = 7.7 months) were trained on simple questions (e.g., Is the bird cleaning?) and either (Experimental group, N = 61) complex noun phrases (e.g., the bird who’s sad) or (Control group, N = 61) matched simple noun phrases (e.g., the sad bird). In general, the two groups did not differ in their ability to produce novel complex questions at test. However, the Experimental group did show (a) some evidence of generalizing a particular complex-NP schema (the [THING] who’s [PROPERTY], as opposed to the [THING] that’s [PROPERTY]) from training to test, (b) a lower rate of auxiliary-doubling errors (e.g., *Is the crocodile who’s hot is eating?) and (c) a greater ability to produce complex questions on the first test trial.

The International Centre for Language and Communicative Development (LuCiD) will bring about a transformation in our understanding of how children learn to communicate, and deliver the crucial information needed to design effective interventions in child healthcare, communicative development and early years education. Learning to use language to communicate is hugely important for society. Failure to develop language and communication skills at the right age is a major predictor of educational and social inequality in later life. 
To tackle this problem, we need to know the answers to a number of questions: How do children learn language from what they see and hear? What do measures of children's brain activity tell us about what they know? And how do differences between children and differences in their environments affect how children learn to talk? Answering these questions is a major challenge for researchers. LuCiD will bring together researchers from a wide range of different backgrounds to address this challenge. The LuCiD Centre will be based in the North West of England and will coordinate five streams of research in the UK and abroad. It will use multiple methods to address central issues, create new technology products, and communicate evidence-based information directly to other researchers and to parents, practitioners and policy-makers. LuCiD's RESEARCH AGENDA will address four key questions in language and communicative development: 1) ENVIRONMENT: How do children combine the different kinds of information that they see and hear to learn language? 2) KNOWLEDGE: How do children learn the word meanings and grammatical categories of their language? 3) COMMUNICATION: How do children learn to use their language to communicate effectively? 4) VARIATION: How do children learn languages with different structures and in different cultural environments? The fifth stream, the LANGUAGE 0-5 PROJECT, will connect the other four streams. It will follow 80 English-learning children from 6 months to 5 years, studying how and why some children's language development is different from others. A key feature of this project is that the children will take part in studies within the other four streams. This will enable us to build a complete picture of language development from the very beginning through to school readiness. 
Applying different methods to study children's language development will constrain the types of explanations that can be proposed, helping us create much more accurate theories of language development. We will observe and record children in natural interaction as well as studying their language in more controlled experiments, using behavioural measures and correlations with brain activity (EEG). Transcripts of children's language and interaction will be analysed and used to model how these two are related using powerful computer algorithms. LuCiD's TECHNOLOGY AGENDA will develop new multi-method approaches and create new technology products for researchers, healthcare and education professionals. We will build a 'big data' management and sharing system to make all our data freely available; create a toolkit of software (LANGUAGE RESEARCHER'S TOOLKIT) so that researchers can analyse speech more easily and more accurately; and develop a smartphone app (the BABYTALK APP) that will allow parents, researchers and practitioners to monitor, assess and promote children's language development. With the help of six IMPACT CHAMPIONS, LuCiD's COMMUNICATIONS AGENDA will ensure that parents know how they can best help their children learn to talk, and give healthcare and education professionals and policy-makers the information they need to create intervention programmes that are firmly rooted in the latest research findings.

A preregistered sample size of 122 children (61 per group, randomly allocated) was determined on the basis of a power calculation (d = 0.3, power = 0.5) for a between-subjects t-test, conducted using G*Power 3.0. Although our analysis plan actually specified the use of mixed-effects models, it is not possible to run a power analysis for such models without simulated data, and we were not aware of any findings from studies with sufficiently similar methods to form the basis for such a simulation. Although a power greater than 0.5 would have been desirable, a total sample size of 122 was our maximum in terms of time, funding and personnel. We go some way towards mitigating this problem by also including a supplementary, exploratory Bayesian analysis (the decision to add this analysis was taken after the main results were known). A total of 143 children completed the experiment, but 21 were excluded (9 from the Experimental group and 12 from the Control group) for failing to meet the preregistered training criteria set out below.

Children were recruited from UK Reception (aged 4-5 years) and Year 1 (5-6 years) classes. The final sample ranged from 4;2 to 6;8 (M = 5;6, SD = 7.7 months; Experimental group: M = 64.85 months, SD = 7.93; Control group: M = 66.54 months, SD = 7.44). Before training, all participants completed the Word Structure test from the CELF-Preschool 2 UK (Wiig, Secord & Semel, 2004). This is a production test of morphosyntax, in which children are asked to complete sentences to describe pictures (e.g., Experimenter: This girl is climbing. This girl is… Child: Sleeping). The purpose of this test was to allow us to verify that the Experimental and Control groups were matched for general ability with morphosyntax. This was found to be the case (Experimental: M = 19.42, SD = 3.05; Control: M = 19.95, SD = 2.79). 
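The reported design parameters (d = 0.3, 61 children per group, power = 0.5) can be sanity-checked with a standard power calculation. The sketch below uses a normal approximation to the t-test rather than G*Power's noncentral-t computation (the difference is small at this sample size), and whether the original calculation was one- or two-tailed is our assumption to explore, not something stated in the record:

```python
# Back-of-the-envelope check of the reported power calculation
# (d = 0.3, n = 61 per group, alpha = .05), using a normal
# approximation to the independent-samples t-test. Whether the
# original G*Power calculation was one- or two-tailed is not
# stated in the record, so both are shown.
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05, tails=2):
    """Approximate power of an independent-samples t-test."""
    z = NormalDist()
    delta = d * sqrt(n_per_group / 2)      # noncentrality parameter
    z_crit = z.inv_cdf(1 - alpha / tails)  # critical value
    return 1 - z.cdf(z_crit - delta)       # prob. of exceeding it

print(f"two-tailed: {approx_power(0.3, 61, tails=2):.2f}")
print(f"one-tailed: {approx_power(0.3, 61, tails=1):.2f}")
```

With these inputs, the one-tailed figure lands at roughly the reported power of 0.5, while the two-tailed figure comes out somewhat lower (around 0.38); we offer this purely as a consistency check on the reported numbers, not as a claim about which test the authors ran.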
We did not include a baseline measure of complex-question production because we did not want to give children practice in producing these questions, since our goal was to investigate the impact of relevant training on children who had previously heard no – or extremely few – complex questions. All participants completed five training sessions on different days. As far as possible, children were trained on five consecutive days, but sometimes this was not possible due to absence. The total span of training (in days) for each child was included as a covariate in the statistical analysis. Each daily training session comprised two sub-sessions: noun phrases and simple yes/no questions, always in that order. The CELF was presented immediately before the first training session on Day 1; the complex-question test session was presented immediately after the final training session on Day 5.

Noun-phrase (NP) training. The aim of this part of the session was to train children in the Experimental group on complex noun phrases (e.g., the bird who’s happy), resulting in the formation of a complex-noun-phrase schema (the [THING] who’s [PROPERTY]) that could be combined with a simple-question schema (Is [THING] [ACTION]ing?) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). On each day, children in the Experimental group heard the experimenter produce 12 such complex noun phrases, and heard and repeated a further 12 such phrases. NP training took the form of a bingo game, in which the experimenter and child took turns requesting cards from a talking dog toy, in order to complete their bingo grids, with the experimenter helping the child by telling her what to say. The dog’s responses were structured such that the child always won the bingo game on Days 1, 3 and 5, and the experimenter on Days 2 and 4, resulting in an overall win for the child. 
In order to provide pragmatic motivation for the use of complex noun phrases (e.g., the bird who’s sad, as opposed to simply the bird), the bingo grid contained two of each animal, with opposite properties (e.g., the bird who’s happy vs the bird who’s sad; the chicken who’s big vs the chicken who’s small), requested on subsequent turns by the child and the experimenter. Two different versions of the game were created, with different pairings of animals and adjectives, the first used on Days 1, 3 and 5, the second on Days 2 and 4. The allocation of NPs to the experimenter versus the child, and the order of the trials, was varied within each version, but was not subject to any between-subjects variation: within a particular group (Experimental/Control) all children had identical training. Children in the Control group received similar training to the Experimental group, except that instead of complex NPs (e.g., the bird who’s happy), they heard and repeated semantically matched simple adjectival NPs (e.g., the happy bird).

Simple-question training. The aim of this part of the session was to train children on simple questions (e.g., Is the bird cleaning?), resulting in the formation of a simple-question schema (Is [THING] [ACTION]ing?) that children in the Experimental group – but crucially not the Control group – could combine with the trained complex-noun-phrase schema (the [THING] who’s [PROPERTY]) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). Simple-question training was identical for the Experimental and Control groups, and took the form of a game in which the child repeated questions spoken by the experimenter, subsequently answered by the same talking dog toy from the NP-training part of the session. The experimenter first explained that “We are going to ask the dog some questions. We’ll see an animal on the card and try to guess what the animal is doing on the other side of the card”. 
On each trial, the experimenter first showed the face of the card depicting the animal doing nothing in particular and said, for example, “On this one, here’s a bird. I wonder if the bird is cleaning. Let’s ask the dog. Copy me. Is the bird cleaning?”. After the child had attempted to repeat the question, the dog responded (e.g., “No, he’s having his dinner”), and the experimenter turned the card to show an illustration depicting the answer. As for the NP training, two different versions of the game were created, with different pairings of animals and actions, the first used on Days 1, 3 and 5, the second on Days 2 and 4, with the order of presentation varied within each version. All children, regardless of group, had identical simple-question training. Note that, in order to encourage schema combination, an identical set of animals featured in the NP training (e.g., the bird who’s sad) and the simple-question training (e.g., Is the bird cleaning?).

Test phase: complex questions. The aim of the test phase was to investigate children’s ability to produce complex questions (e.g., Is the crocodile who’s hot eating?) by combining the trained complex-NP and simple-question schemas (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). Because we were interested in training an abstract schema, rather than particular lexical strings, the target complex questions for the test phase used only animals, verbs and adjectives that had not featured during training. The game was very similar to that used in the simple-question training, except that children were told “This time you are not going to copy me. I will tell you what to ask, and you can ask the dog”. For each trial, the experimenter held up the relevant card and said, for example: “Two crocodiles: hot and cold [points to each crocodile; one wearing swimwear on a beach; the other wearing winter clothing in snow]. I wonder if the crocodile who’s hot is eating. Ask the dog if the crocodile who’s hot is eating”. 
Note that this prompt (equivalent to that used in Ambridge et al., 2008) precludes the possibility of the child producing a well-formed question simply by repeating part of the experimenter’s utterance. As before, the dog then answered (e.g., “Yes, he’s having his breakfast”), and the experimenter turned the card to show the relevant animation. Each child completed 12 test trials in random order.

In order to ensure that both the Experimental and Control groups were made up of children who had successfully completed the training, we followed our preregistered exclusion criteria, which specified that “any child who does not correctly repeat at least half of the noun phrases and at least half of the questions on all five days will be excluded… All children who complete the training and test to criterion (outlined above) will be included, and any who do not will be replaced”. On this criterion, we excluded 21 children. All participants produced scorable responses for all trials, with no missing data (i.e., all responses were clearly some attempt at the target question). Presumably this was due to our extensive training and strict exclusion criteria, which ensured that children were competent and confident in putting questions to the talking dog in response to prompts from the experimenter.

Responses were coded according to the coding scheme, which also shows the number of responses in each category, for each group. In order to check reliability, all responses were independently coded by two coders: the first and final author. At the first pass, the coders showed 100% agreement on the classification of responses as correct (1) or erroneous (0); the only disagreements related to the classification of error types (84 cases, for an overall agreement rate of 94.3%). All of these discrepancies related to ambiguities in the coding scheme and, following discussion, were resolved, yielding 100% agreement.
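The reported reliability figures are internally consistent: 122 included children each completing 12 test trials gives 1,464 coded responses, and 84 error-type disagreements out of 1,464 yields the stated 94.3% agreement. A minimal sketch of that arithmetic (the response total of 1,464 is our inference from the sample size and trial count, not a figure stated explicitly in the record):

```python
# Consistency check on the reported inter-rater agreement rate.
# The total of 1,464 responses is inferred (122 children x 12 test
# trials); it is not stated explicitly in the record.
children = 122
trials_per_child = 12
disagreements = 84          # error-type classifications only

total = children * trials_per_child            # 1464 coded responses
agreement = (total - disagreements) / total    # proportion agreed

print(f"{total} responses, agreement = {agreement:.1%}")  # 94.3%
```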

Identifier
DOI https://doi.org/10.5255/UKDA-SN-853879
Metadata Access https://datacatalogue.cessda.eu/oai-pmh/v0/oai?verb=GetRecord&metadataPrefix=oai_ddi25&identifier=2392ee97cd6a7acd9a65446409b9ac45f77af9725006f13c1fcc98a395a2f48e
Provenance
Creator Ambridge, B, University of Liverpool; Gummery, A, University of Liverpool; Rowland, C, Max Planck Institute for Psycholinguistics
Publisher UK Data Service
Publication Year 2021
Funding Reference Economic and Social Research Council
Rights Ben Ambridge, University of Liverpool; The Data Collection is available to any user without the requirement for registration for download/access.
OpenAccess true
Representation
Resource Type Numeric; Text; Still image; Audio; Software
Discipline Humanities; Linguistics; Psychology; Social and Behavioural Sciences
Spatial Coverage United Kingdom