The study is an eyetracking experiment run with 12- and 28-month-old children and adults. Participants from each age group were randomly assigned to a label or no-label condition. Participants saw a single shape prime from a novel shape category for 10 s, followed by four further target shapes from the same category, for 10 prime-target pairs (20 for adults). Participants in the no-label condition viewed these images in silence; participants in the label condition heard a sentence, spoken by a female British English speaker, labelling the shapes with a novel label. More details are provided in cur2d-methods.pdf. The current upload consists of eyetracking data, anonymised participant information, visual and auditory stimuli, and details of the design, participants, and methods.

The overall goal of this award was to understand how babies learn when allowed to explore their environment based on their own curiosity, outside the constrained experimental settings typical of most research in early cognitive development. We were also interested in how this curiosity-based exploration might be influenced by language. This goal was approached in two ways: first, using computational modelling to examine the potential learning mechanisms involved in curiosity; and second, experimentally, to develop a picture of what babies and toddlers do when engaged in curiosity-driven learning.

In our computational work we developed the first model of babies' curiosity-driven learning inspired by mechanisms known to exist in the human brain. This model predicted that when allowed to choose freely what to learn from and when, young children should learn best from an environment which is neither too simple nor too complex; that is, input of medium difficulty should best support learning, and, importantly, children should be able to generate this level of difficulty themselves without adults structuring their learning environment on their behalf. This model was published in a high-impact interdisciplinary journal (Twomey & Westermann, 2018, Dev. Sci.) and the code and data were made publicly available (see Related Resources). Several international requests for re-use of this code have been made since publication.

Our empirical work aimed to test the predictions of the model. In Study 1 we showed 12- and 28-month-old toddlers 2D images and recorded where they looked and for how long. Both groups of children generated patterns of looking which were of intermediate complexity (Twomey, Malem, Ke & Westermann, in prep.). In Study 2, we allowed 12-, 18- and 24-month-old infants to play freely with custom-designed, 3D-printed categories of novel objects. Again, children of all ages explored the objects in an order which led to medium complexity (Ke, Westermann & Twomey, in prep[a]). This study also generated a video dataset from the 12-month-old participants showing their field of view (Ke, Westermann & Twomey, in prep[b]). This dataset will allow us to conduct fine-grained analyses of how young children visually explore the objects they are playing with, linking the findings from Studies 1 and 2. Overall, the empirical data support the predictions of the model, providing the first evidence that not only do infants learn best from input of intermediate difficulty, but, critically, also that they are capable of generating this level of difficulty independently.
Put differently, rather than being passive learners or random explorers, infants are active learners who are capable of independently tailoring their learning environment in a way that best supports their own development.
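To make the "learning progress" principle behind the model's prediction concrete, the Python fragment below is a toy sketch, not the published Twomey & Westermann (2018) implementation: it assumes each stimulus class follows an exponential learning curve whose rate falls with complexity, and lets a learner repeatedly sample whichever class offers the greatest expected one-step error reduction. All constants are illustrative assumptions.

    # Toy sketch of curiosity as "learning progress" maximisation.
    # NOT the published model: learning curves, rates and constants
    # below are illustrative assumptions only.

    complexity = {"simple": 0.2, "medium": 0.5, "complex": 0.9}

    # Assumption: prediction error starts at the stimulus complexity
    # and decays exponentially when sampled; harder stimuli are
    # learned more slowly (rate falls with squared complexity).
    error = dict(complexity)
    rate = {k: 0.6 * (1.0 - c) ** 2 for k, c in complexity.items()}

    history = []
    for _ in range(30):
        # Expected one-step error reduction ("learning progress").
        progress = {k: rate[k] * error[k] for k in error}
        pick = max(progress, key=progress.get)  # most learnable class
        error[pick] *= 1.0 - rate[pick]         # learn from the pick
        history.append(pick)

    print({k: history.count(k) for k in complexity})
    # Under these assumptions the medium class is sampled most often:
    # simple items are exhausted quickly, complex items yield little
    # progress per sample.

Under these toy assumptions, selections concentrate on the medium-complexity class, mirroring the model's prediction that a progress-maximising learner gravitates toward input of intermediate difficulty.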
Eyetracking (Tobii); questionnaires (vocabulary inventory).
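For re-users of the deposited eyetracking data, the following Python sketch shows one way total looking time per trial could be aggregated from raw Tobii gaze samples. It is a hypothetical illustration: the column names ("participant", "trial", "timestamp_ms", "validity_left", "validity_right") are placeholders rather than the deposit's actual variable names, and the validity convention should be checked against the export.

    # Hedged sketch: per-trial looking time from a Tobii gaze-sample
    # export. All column names are hypothetical placeholders.
    import pandas as pd

    def looking_time(gaze: pd.DataFrame) -> pd.DataFrame:
        # Keep samples where at least one eye was tracked (assuming
        # the common Tobii convention that validity code 0 = valid).
        valid = gaze[(gaze["validity_left"] == 0) |
                     (gaze["validity_right"] == 0)].copy()
        # Time elapsed since the previous valid sample in each trial.
        valid["dt_ms"] = (valid
                          .groupby(["participant", "trial"])["timestamp_ms"]
                          .diff().fillna(0))
        # Sum tracked time per participant and trial.
        return (valid
                .groupby(["participant", "trial"], as_index=False)["dt_ms"]
                .sum().rename(columns={"dt_ms": "looking_ms"}))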