Pre-conference events

This year, we have two tutorials and one workshop, all taking place in parallel on the morning of Thursday 15 January (including a catered coffee break).

There is an additional 40 EUR fee to attend pre-conference events. The events have limited capacity and spots will be given on a first-come, first-served basis. Those presenting in a pre-conference workshop or tutorial are not required to pay the additional fee.

Tutorial 1: Thinking Like a Baby: Using Concept Mapping to Build an Infant Cognition Framework
by Lorraine Afflitto and Gen Tsudaka with invited speaker Ruthe Foushee (The Center for Research with Infants and Toddlers at The New School for Social Research)

Thursday, 15 January 2026, 8:30 – 12:30

Despite encouragement to “think like a baby” (Zetterstein et al., 2025) and our own desire to do so, researchers still face a central challenge: What does it truly take to think like an infant? We wrestle with this question as we attempt to see the world through an infant’s eyes, uncover their intuitive responses, trace emerging concepts, and follow the unfolding of developmental trajectories.

The "Thinking Like a Baby" tutorial is inspired by nursing’s commitment to patient-centered care, where care is guided by patients' responses—essentially, "thinking like a patient." Nurses rely on established frameworks, such as the NANDA nursing diagnosis model, to interpret experiences. The process is straightforward, moving from patient data collection to data visualization via concept mapping of care-relevant domains, and finally to interpretation. Interpretations are called nursing diagnoses: clinical judgments about a patient's response to an actual or potential health problem or life process.

Building on this approach, the "Thinking Like a Baby" tutorial proposes a bottom-up, data-driven approach to constructing an infant cognition framework. By mapping empirical data onto cognitive constructs and their corresponding mental representations, participants “think like a baby.” In turn, they can explain and predict infant behavior with a judgment, grounded in ontogenetic, epistemological, and empirical considerations, about an infant's intuitive reaction or predisposition to respond to environmental or lab-based stimuli.

In agreement with Cusack et al. (2024), we view infants as active, goal-directed agents who engage with their environment and structure their own cognitive frameworks. In synchrony with the infant’s own framework building, our framework affords researchers the opportunity to identify relationships among constructs, uncover gaps in the literature, and generate new, testable hypotheses grounded in the infant experience.

This tutorial is intended for researchers across career stages in cognitive developmental psychology and cognitive science who conduct preverbal infant research and want a practical, shareable, standardized methodology that targets the infant experience, as well as early-career and graduate-student researchers interested in exploring a novel approach to data schematics, construct development, measures, and hypotheses.

Learners will be able to:

  1. Understand the process behind creating an Infant Cognition Framework grouped by specific developmental cognitive constructs and their respective mental representations.
  2. Identify the cognitive constructs that are meaningful and broadly applicable in infant cognition research (e.g. core knowledge domains, naive/innate/lay theories, and language and communication).
  3. Understand how infants might mentally represent cognitive constructs within each domain (e.g. thick vs. thin relationships within the core knowledge social groups cluster (Thomas et al., 2020), and word–referent mappings that extend beyond direct caregiver–infant interaction within the language and communication cluster (Foushee & Srinivasan, 2024)).
  4. Explain the research tools, steps, and strategies that guide this framework-building — including data collection, concept mapping, interpretation, and hypothesis generation.
  5. Apply all of the above in creating an Infant Cognition Framework for developmental research.

This highly interactive session blends didactic and hands-on learning, anchored in real-world examples. Participants learn about each element of the methodological strategy in brief lectures or discussions and immediately apply that knowledge through small-group activities. Modeled on the NANDA nursing diagnosis framework, which nurses use to systematically interpret and respond to the patient experience, this program introduces a structured approach for understanding infant cognition.

8:30 - 9:00 AM Registration

9:00 - 9:15 AM Welcome and Introductions

9:15 - 10:00 AM Interactive lecture
Translating the NANDA Model: Concept Mapping from Data to Cognitive Constructs to Mental Representations
Here we introduce an adaptation of the NANDA nursing diagnosis framework for infant cognition. Participants learn how to use concept mapping to connect data, cognitive constructs, and mental representations within a structured, infant-centered framework.

10:00 - 10:20 AM Coffee break

10:20 AM - 12:00 noon Hands-on learning
Framework Development Lab: Mapping Data to Constructs and Mental Representations
Participants apply concept mapping to translate data into cognitive constructs and mental representations, collaboratively building an infant-centered framework.

12:00 - 12:30 PM Debrief
The program concludes with a debrief led by Ruthe Foushee, where participants reflect on their learning and explore implications for infant cognition research and practice.

12:30 PM Adjourn

Workshop 2: The format and structure of thought in the developing mind
by Barbara Pomiechowska (University of Birmingham), Eric Mandelbaum (City University of New York)

Thursday, 15 January 2026, 8:30 – 12:30

What is it to think like a baby? What are the building blocks of the human mind that predate the mastery of language? What kinds of mental representations and structures are available in infancy to support learning, communication, and inference? How do they change over developmental time?

Evidence from developmental cognitive science shows that human infants are able to set up discrete structured representations, which support object and numerosity tracking, action interpretation, and language acquisition. Even though some of the representational processes and structures involved have been well described (e.g. object files, core concepts), their format and functions have not been fully understood (e.g. when an infant will deploy a concept vs. an object file vs. both; what formats object files and core cognition take). Others remain elusive and their availability in early development is debated (e.g. symbols, logical operators). It is also an open question when and how infants begin to combine these representations into complex thoughts.

Goals. The goals of this workshop are to (1) review the latest empirical evidence and theoretical trends, (2) connect different lines of theory and empirical enquiry, (3) brainstorm how to tackle the outstanding questions in theoretically and empirically fruitful ways.

Overview. The workshop will open with an introductory talk highlighting key themes and questions. Then, across five talks, speakers will explore different formats and structures of thought. The workshop will conclude with a panel discussion that will tie together different perspectives and explore directions for future research.

9:00 From format to architecture, Eric Mandelbaum (CUNY)

9:20 Symbolic depictions, Gergely Csibra (CEU)

9:40 Representing relations, Jean-Remy Hochmann (CNRS)

10:00 Generalization in language and thought, Marjorie Rhodes (NYU)

10:20 Coffee break

10:45 On the minimal format of object representations, Melissa Kibbe (Boston University)

11:05 Compositionality and combinatorial thought, Barbara Pomiechowska (University of Birmingham)

11:25 Discussion by Susan Carey (CUNY)

11:45 Open discussion

The program ends at 12:15.

Tutorial 3: Hands-on Generalised Mixed-Effects Models in R
by Francesco Poli (MRC Cognition and Brain Sciences Unit, University of Cambridge)

Thursday, 15 January 2026, 8:30 – 12:30

Developmental and comparative data are often messy: infants and children respond differently to the same stimuli, fatigue over trials, and outcomes are frequently non-normal (right-skewed reaction times, binary choices, etc.). Standard t-tests/ANOVAs can miss these complexities. A modern workflow treats linear models (LMs), linear mixed-effects models (LMMs), and generalised linear mixed-effects models (GLMMs) as a continuum: start simple, then add the structure your data demand (nesting, random effects) and the distribution your outcome actually follows (e.g., Gamma for RTs, binomial for accuracy).
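That LM → LMM → GLMM continuum can be sketched in a few lines of R. This is an illustrative example only, not the tutorial's own code: the variable names (`rt`, `condition`, `subject`) and the data frame `dat` are hypothetical, and the calls use the `lme4` package the tutorial is built around.

```r
library(lme4)

# LM: ignores repeated measures; assumes Gaussian outcomes
m_lm   <- lm(rt ~ condition, data = dat)

# LMM: adds the nesting structure -- a random intercept
# (and slope) per subject for repeated trials
m_lmm  <- lmer(rt ~ condition + (1 + condition | subject), data = dat)

# GLMM: additionally respects the outcome's distribution,
# e.g. a Gamma family with a log link for right-skewed RTs
m_glmm <- glmer(rt ~ condition + (1 + condition | subject),
                data = dat, family = Gamma(link = "log"))
```

Each step keeps the same fixed-effect formula and layers on only the structure the data demand, which is the "start simple, then add" logic described above.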

Building on DevStart’s core principles of free, open-source, collaborative, and practical science, we assume minimal prior knowledge and learn by doing.

All examples use R/RStudio with transparent code that attendees can lift into their own projects. We work with an eye-tracking dataset, but the workflow generalises to EEG, accuracy/choice, and count data. By the end, participants can fit, check, interpret, and visualise GLMMs that are publication-ready, and, crucially, understand why those choices make sense for developmental science.

We follow a hands-on approach built around live coding in R from scratch. Short micro-exercises (2–6 minutes) follow a “run → tweak → interpret” structure that we solve together, thinking aloud and deciding next steps as a group. At the end, attendees can bring a question or design, and we map it to a GLMM template. Everything is accessible, with plain-language explanations before the math, and all code is shared for reuse.

By the end of the tutorial, participants will be able to:

  • Diagnose which statistical model is appropriate for the specific data they have.
  • Choose appropriate families and links for GLMMs.
  • Specify random effects (intercepts/slopes; nesting vs. crossing) that match their design.
  • Run GLMMs with glmer() and extract, visualise, and explain effects using modelbased/emmeans (predictions, estimated means, contrasts, slopes).
  • Check assumptions and fit with performance::check_model(), compare models (AIC), and report results clearly and reproducibly.
  • Implement an end-to-end, open, reusable analysis pipeline for their own datasets.
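As a rough sketch of that end-to-end pipeline (fit → check → interpret), assuming the `lme4`, `performance`, and `emmeans` packages named in the objectives above; the simulated binary-accuracy data and variable names are purely illustrative:

```r
library(lme4)
library(performance)
library(emmeans)

# Simulated binary-accuracy data (illustrative only)
set.seed(1)
dat <- data.frame(
  subject   = factor(rep(1:30, each = 20)),
  condition = factor(rep(c("A", "B"), times = 300)),
  acc       = rbinom(600, 1, 0.7)
)

# Fit: binomial GLMM with a random intercept per subject
m <- glmer(acc ~ condition + (1 | subject),
           data = dat, family = binomial(link = "logit"))

# Check: visual diagnostics of assumptions and fit
check_model(m)

# Interpret: estimated marginal means and contrasts,
# back-transformed to the probability scale
emmeans(m, ~ condition, type = "response")
```

The same skeleton carries over to the eye-tracking dataset used in the session: only the family/link and the random-effects structure change with the design.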

Prerequisites: Basic familiarity with R is helpful but not required.

Participants bring: Laptop with R and RStudio installed (instructions will be circulated before the tutorial); headphones.

08:30-09:00 Why GLMMs on top of t-tests/ANOVAs?
When Gaussian fails; outcomes & links; nesting and random effects (quick LM→LMM→GLMM staircase).

09:00-09:30 Fitting your first GLMM
glmer() syntax; choosing Gamma/log for RTs; convergence tips.

09:30-10:00 Random-effects structures
Intercepts vs. slopes; model comparison (AIC).

10:00-10:30 Coffee break (flexible based on actual final schedule)

10:30-10:45 Diagnostics
performance::check_model() to check model assumptions and fit

10:45-11:30 Telling the story
Predictions, estimated means, contrasts, and slopes (modelbased/emmeans); publication-ready plots.

11:30-12:15 Mini-clinic & wrap-up
Map attendee designs to GLMM templates (binary accuracy, counts); open questions.