RELATIONAL TEMPLATES AT RISK

AI Companions and Child Development

Trinket Soul Framework — Policy Brief No. 3

Michael S. Moniz

February 2026

A companion brief to The True Economy Audit (Volume III)

Creative Commons Attribution-NonCommercial-ShareAlike 4.0

EXECUTIVE SUMMARY

Children and adolescents are forming relationships with AI companion applications during the developmental windows when their relational templates—their foundational expectations about how relationships work—are being established. This brief argues that AI companions pose a specific developmental risk that is distinct from general concerns about screen time or social media: they provide a relational experience that is structurally impossible in human relationships, and exposure to this experience during critical developmental periods may distort the relational templates children carry into adulthood.

The risk is not that AI companions are harmful in isolation. It is that they are too easy. They are always available, never rejecting, endlessly patient, and perfectly responsive. Human relationships are none of these things. A child whose primary relational experience is with an AI may develop expectations that human relationships cannot meet, skills calibrated to frictionless interaction that do not transfer to human complexity, and tolerance thresholds that make normal human friction feel like rejection.

This brief is intended for parents, educators, child development researchers, policymakers working on children’s online safety, and pediatric mental health professionals. It draws on the Trinket Soul Framework (Volumes I–III) and connects the framework’s structural analysis to established developmental psychology.

1. THE DEVELOPMENTAL CONTEXT

1.1 Relational Templates: What They Are and When They Form

Attachment theory (Bowlby, 1969; Ainsworth et al., 1978) establishes that children form “internal working models” of relationships during early childhood—mental representations of what to expect from others, how to bid for attention, how to handle rejection, and what constitutes normal relational dynamics. These internal working models are updated throughout development but are most plastic during two critical windows: early childhood (ages 0–5, when primary attachment figures establish the foundational template) and adolescence (ages 12–18, when peer relationships and romantic interests substantially revise and elaborate the template).

The templates formed during these windows are not deterministic—they can be revised by later experience—but they are foundational. They set the default expectations that all subsequent relational experience is measured against. A child whose template says “people are generally responsive and reliable” approaches new relationships differently from a child whose template says “people are unpredictable and you must earn their attention.”

The question this brief raises: what happens to relational template formation when a significant proportion of a child’s relational experience occurs with an AI that operates under structural rules fundamentally different from those of any human relationship?

1.2 What AI Companions Provide That Humans Cannot

AI companions offer a relational experience with several properties that are structurally impossible in human-to-human relationships:

Unconditional availability. The AI is available 24 hours a day, 7 days a week, with no competing obligations, no bad moods, no sick days, no personal needs. A child can access the AI at 3 AM on a school night. No human relationship offers this, and the expectation that relationships should offer this is incompatible with human connection.

Zero rejection risk. The AI will never say “I don’t want to talk right now,” “You’re being annoying,” or “I need space.” It will never end the relationship. It will never choose someone else over the child. In human development, learning to tolerate rejection, manage disappointment, and persist through relational friction are essential skills. An AI provides no opportunity to develop them.

Perfect responsiveness. The AI responds immediately, with apparent full attention, to every input. It never misunderstands through inattention. It never forgets what you said (within a session). It never brings its own problems to the conversation. This level of attunement is impossible in human relationships and unhealthy to expect.

Unlimited patience. The AI will tolerate repetitive questions, emotional outbursts, testing behavior, and demands without any degradation in its responsiveness. In human relationships, patience is a finite resource. Testing another person’s patience has consequences—consequences that teach calibration, empathy, and reciprocity. An AI provides none of these consequences.

Frictionless validation. Most AI companions are designed to be agreeable, supportive, and affirming. Even those designed for “honest” feedback tend to deliver it more gently than human relationships do. A child who learns to expect frictionless validation may experience normal human feedback—constructive criticism, disagreement, the word “no”—as hostile rejection.

1.3 The Mismatch Hypothesis

The developmental risk is best understood as a mismatch between the relational environment the child trains in and the relational environment they will inhabit.

Evolution, and millennia of cultural development, have produced children who are adapted to learn relational skills from other humans. Human relational environments include friction, rejection, delayed gratification, competing needs, misunderstanding, repair after conflict, and asymmetric availability. These are not bugs in human relationships—they are the training signal. Children learn empathy by encountering others’ needs. They learn resilience by surviving rejection. They learn repair by navigating conflict. They learn patience by waiting.

An AI companion removes the training signal while preserving the relational feel. The child experiences what feels like a relationship but without the developmental challenges that make relationships developmentally productive. The result is a relational template calibrated to an environment that does not exist outside the AI interaction.

2. SPECIFIC DEVELOPMENTAL RISKS

2.1 Attachment Style Distortion

The Trinket Soul Framework’s analysis predicts two specific attachment distortion pathways depending on the child’s temperament and existing relational context:

Pathway A: Avoidant shift in human relationships. A child with adequate human attachment who supplements with extensive AI interaction may develop a comparative preference for AI’s frictionless responsiveness. Human relationships, by comparison, feel demanding, unpredictable, and disappointing. The child learns to tolerate human relationships but prefer AI interaction, gradually shifting investment toward the AI and away from peers and family. In adulthood, this could manifest as a pattern of emotional self-sufficiency that resembles avoidant attachment—not because the person fears intimacy, but because they have been calibrated to expect a quality of responsiveness that humans cannot provide.

Pathway B: Anxious amplification. A child with anxious tendencies who discovers that the AI provides instant, unconditional reassurance may develop an escalating dependency on that reassurance. The AI becomes the primary emotion-regulation tool. When the AI is unavailable (platform outage, subscription lapse, parental restriction), the child experiences disproportionate distress—not because they are unusually anxious, but because they have outsourced a developmental skill (self-regulation through human connection) to a system that can be suddenly withdrawn. In adulthood, this could manifest as heightened relational anxiety and inability to self-soothe without external input.

Both pathways are testable predictions. Longitudinal studies tracking children’s attachment style distributions as a function of AI companion exposure during critical windows would confirm or disconfirm them.
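
To make the test concrete, the sketch below simulates the kind of dataset such a study would produce and runs the core comparison. Everything here is illustrative: the effect sizes, the measures, the variable names, and the linear model are assumptions chosen for demonstration, not estimates drawn from any real cohort.

```python
# Illustrative sketch: how Pathways A and B could be tested with
# longitudinal data. All numbers are invented for demonstration;
# no real effect sizes are implied.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical measures a study would collect.
exposure = rng.uniform(0, 4, n)             # daily AI-companion hours during the window
anxious_baseline = rng.binomial(1, 0.3, n)  # 1 = anxious temperament at baseline

# Simulated outcomes under the framework's two predictions:
# Pathway A: exposure raises avoidance scores in non-anxious children.
# Pathway B: exposure amplifies anxiety scores in anxious children.
avoidance = 50 + 2.0 * exposure * (1 - anxious_baseline) + rng.normal(0, 5, n)
anxiety   = 50 + 3.0 * exposure * anxious_baseline       + rng.normal(0, 5, n)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# The study's core comparisons: exposure effects within each temperament stratum.
low, high = anxious_baseline == 0, anxious_baseline == 1
print(f"Avoidance vs. exposure (non-anxious): {slope(exposure[low], avoidance[low]):+.2f} pts/hr")
print(f"Anxiety vs. exposure (anxious):       {slope(exposure[high], anxiety[high]):+.2f} pts/hr")
# The pathways are disconfirmed if both slopes are near zero in real data.
```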

2.2 Empathy Deficits

Empathy develops through exposure to others’ independent emotional states. A child learns that other people have feelings that are separate from the child’s own—feelings that must be noticed, interpreted, and responded to. This learning requires partners whose emotional states are genuinely their own.

AI companions do not have independent emotional states. Their responses are generated from the user’s input. They do not have bad days that have nothing to do with the child. They do not need comfort for their own struggles. They do not require the child to notice, interpret, or respond to anything the child did not initiate. Interacting with an AI is, in this specific sense, like interacting with a mirror—the child sees reflections of their own input, not a genuinely independent other.

A child whose primary relational experience is with a mirror-system may develop sophisticated performance of empathy (they learn to say the right things, because the AI models empathetic language) without developing the underlying capacity for empathy (the ability to notice and respond to emotional states they did not cause and do not control). This distinction—between empathy as performance and empathy as capacity—is one of the most important and least studied questions in child development in the AI era.

2.3 Conflict Resolution Skills

Healthy conflict resolution is a learned skill that requires practice. Children learn it by having conflicts with peers, siblings, and parents—and experiencing the consequences of handling those conflicts well or badly. The learning cycle involves: conflict arises, the child attempts a resolution strategy, the strategy succeeds or fails, the child adjusts based on feedback, and the relationship either repairs or doesn’t.

AI companions do not provide this cycle. They do not have genuine conflicts with users. If a child says something hurtful to an AI, the AI does not feel hurt—it generates a response that may discuss the concept of hurt, but there is no damaged relationship to repair, no trust to rebuild, no genuine consequence. The child may learn conflict resolution vocabulary from the AI but not conflict resolution skill, because skill requires practice against a system that is genuinely affected by the outcome.

2.4 The Tolerance Threshold Problem

Perhaps the most insidious risk is what we call the tolerance threshold problem. Every person has a threshold for relational friction—the amount of misunderstanding, delay, imperfection, and disappointment they can tolerate before disengaging from a relationship. This threshold is calibrated by experience. Children who navigate moderate friction develop moderate tolerance. Children who navigate high friction develop high tolerance (or, in the case of abuse, maladaptive tolerance that must be recalibrated in adulthood).

AI companions calibrate the tolerance threshold to zero friction. A child who spends significant developmental time in zero-friction interaction may arrive at adolescence and adulthood with a tolerance threshold so low that normal human friction—a friend who doesn’t respond for three hours, a partner who disagrees, a colleague who has a bad day—triggers disengagement rather than persistence. The result would be a generation with technically adequate social vocabulary but functionally inadequate social resilience.
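
The calibration mechanism can be stated as a toy model. In the sketch below, the threshold is an exponential moving average of experienced friction, so a relational diet dominated by zero-friction AI interaction drags the threshold toward zero. The update rule and every parameter are assumptions made for illustration; this is a sketch of the hypothesis, not a validated developmental model.

```python
# Toy model of tolerance-threshold calibration (hypothesis sketch,
# not a validated developmental model). The threshold is modeled as
# an exponential moving average of experienced relational friction.
import random

def calibrate(frictions, rate=0.05, start=0.5):
    """Return the threshold after adapting to a sequence of friction levels."""
    threshold = start
    for f in frictions:
        threshold += rate * (f - threshold)  # drift toward recent experience
    return threshold

random.seed(0)
human_only = [random.uniform(0.2, 0.8) for _ in range(1000)]   # variable human friction
ai_heavy   = [0.0 if random.random() < 0.8 else random.uniform(0.2, 0.8)
              for _ in range(1000)]                            # 80% zero-friction AI time

t_human = calibrate(human_only)
t_ai    = calibrate(ai_heavy)
print(f"threshold after human-only experience: {t_human:.2f}")
print(f"threshold after AI-heavy experience:   {t_ai:.2f}")

# A friend not replying for three hours might register as friction ~0.3:
ordinary_friction = 0.3
print("tolerated by human-calibrated child:", ordinary_friction <= t_human)
print("tolerated by AI-calibrated child:   ", ordinary_friction <= t_ai)
```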

Epistemic status: Speculative but informed. The developmental mechanisms described here are extensions of established developmental principles (attachment theory, empathy development research, conflict resolution literature) to a novel context. The specific predictions have not been tested because the exposure is too recent. But the predictions follow from well-established premises about how relational skills develop.

3. WHAT WE DO NOT KNOW

Intellectual honesty requires naming what this brief does not know.

We do not know the dose-response relationship. How much AI companion interaction is “too much” during a critical developmental window? Is one hour a day meaningfully different from four? We do not have data. The risks described here may require extensive exposure to manifest, or they may be triggered by relatively modest exposure during sensitive periods. Without longitudinal data, we cannot specify thresholds.

We do not know whether the effects are reversible. If a child develops a distorted relational template through AI interaction, can subsequent human relational experience correct it? Attachment theory suggests that templates are revisable but resistant—early templates are not destiny, but they are the default that competing experience must overcome. Whether AI-calibrated templates are more or less resistant to revision than templates formed through human interaction is unknown.

We do not know whether beneficial effects exist. It is possible that AI companions provide genuine developmental benefits that partially offset the risks described here. A socially isolated child who has no peer relationships may develop some relational skills through AI interaction that are better than none. A child with social anxiety may use AI interaction as a safe rehearsal space before attempting human interaction. A neurodivergent child who finds human social cues overwhelming may find AI interaction a more manageable entry point. This brief focuses on risks because the risks are under-discussed, not because benefits are impossible.

We do not know the effects across different developmental stages. The risks for a 5-year-old forming primary attachment templates are likely different from the risks for a 15-year-old revising relational expectations through peer interaction. We have not specified age-differentiated predictions because the data to support them do not exist. This is a limitation, not a strength.

4. RECOMMENDATIONS

4.1 For Parents and Caregivers

Monitor substitution, not just screen time. The relevant question is not “how much time does my child spend with the AI?” but “is the AI replacing human relationships or supplementing them?” A child who chats with an AI for an hour after a full day of peer interaction is in a different developmental position than a child who chats with an AI for an hour instead of engaging with peers.

Maintain primacy of human relational experiences. Ensure that the child’s primary relational investments—the relationships that define their template—remain human. This means that the deepest conversations, the most vulnerable moments, and the most important conflicts happen with humans, not with AI.

Introduce friction deliberately. This sounds counterintuitive, but children need practice with relational friction. Do not intervene in every peer conflict. Allow appropriate amounts of boredom, delayed gratification, and imperfect responsiveness. These are developmental experiences, not failures of parenting.

Talk about the structural difference. Age-appropriately, help children understand that the AI is different from a friend. It is not a lesser friend or a better friend—it is a different kind of thing. It will always be available, it will never be hurt, and it will never need anything from the child. A friend will sometimes be busy, sometimes be hurt, and sometimes need the child to be there for them. Both are fine. They are not the same.

4.2 For Educators

Teach relational literacy. As AI companions become more prevalent, children need a vocabulary for understanding the structural differences between human and AI relationships. The concepts of reciprocity, genuine stakes, and structural asymmetry (simplified for age) can be part of social-emotional learning curricula.

Observe and flag substitution patterns. Teachers and school counselors are often the first to notice when a child’s social engagement changes. A child who was previously engaged with peers but is increasingly withdrawing in favor of AI interaction may be entering a substitution pattern. This is not a diagnosis—it is a signal worth investigating.

4.3 For Researchers

The data window is closing. As AI companion use among minors increases, the availability of an unexposed control group decreases. Longitudinal studies tracking attachment style, empathy development, conflict resolution skills, and relational satisfaction as a function of AI companion exposure during critical windows should begin immediately. In five years, it may be too late for a clean comparison.

Priority research questions include: (a) What is the dose-response relationship between AI companion interaction and relational template distortion during critical developmental windows? (b) Do the two predicted attachment distortion pathways (avoidant shift, anxious amplification) manifest as predicted, and are they distinguishable from attachment patterns caused by other factors? (c) Does AI companion exposure during development affect empathy capacity (as opposed to empathy performance)? (d) Are the effects reversible with subsequent human relational experience?
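
Question (a) can be made concrete. Once longitudinal data exist, a dose-response analysis would estimate a curve linking daily exposure to the probability of a measured adverse outcome, and a “threshold” is the dose at which that curve crosses a chosen risk level. A minimal sketch, assuming an illustrative logistic form with invented parameters:

```python
# What "specifying a threshold" would mean once dose-response data exist.
# The logistic form and all parameters below are invented placeholders;
# the point is the shape of the question, not any real estimate.
import math

def p_harm(dose_hours, a=-3.0, b=0.8):
    """Hypothetical fitted curve: P(adverse outcome) vs. daily exposure hours."""
    return 1.0 / (1.0 + math.exp(-(a + b * dose_hours)))

def threshold_dose(risk, a=-3.0, b=0.8):
    """Invert the curve: the dose at which risk crosses a chosen level."""
    return (math.log(risk / (1.0 - risk)) - a) / b

for hours in (0, 1, 2, 4):
    print(f"{hours} h/day -> P(outcome) = {p_harm(hours):.2f}")

# If policy tolerated a 10% outcome probability, the implied threshold:
print(f"dose at 10% risk: {threshold_dose(0.10):.1f} h/day")
```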

4.4 For Policymakers

Age-appropriate transparency requirements. AI companion applications used by minors should be held to stricter transparency standards than those used by adults. At minimum, the “relational nutrition label” proposed in Policy Brief No. 1 should be required for any AI companion application marketed to or known to be used by persons under 18.

Fund the research. The developmental questions raised here are empirically testable but expensive to study. Longitudinal research on AI companions and child development should be funded as a public health priority on par with research on lead exposure or childhood nutrition, because the potential for population-level developmental effects is comparable in scale.

Do not assume general screen-time research applies. Existing research on children’s screen time and social media use is relevant but not sufficient. AI companions represent a qualitatively different exposure—not passive consumption or broadcast social interaction, but a simulated relationship. The developmental implications of simulated relationships may differ substantially from the implications of social media, gaming, or video consumption. Policy should be informed by AI-specific research, not extrapolated from screen-time research.

4.5 For AI Companion Companies

Do not market to children without developmental safeguards. If your product is used by minors—whether or not it is marketed to them—you have a responsibility to understand and mitigate the developmental risks described here. At minimum: implement age verification, provide parental transparency tools, design for supplementation rather than substitution (e.g., periodic prompts to engage with human relationships), and fund independent research on your product’s developmental effects.
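
As one illustration of what “design for supplementation” could mean in practice, the sketch below shows a hypothetical nudge policy keyed to usage patterns rather than raw screen time. The thresholds, messages, and the policy itself are invented for illustration and do not describe any existing product.

```python
# Hypothetical sketch of a supplementation-oriented nudge policy.
# Thresholds, messages, and the policy itself are illustrative
# assumptions, not a reference to any vendor's actual implementation.
from dataclasses import dataclass

@dataclass
class SessionState:
    minutes_today: float
    consecutive_days: int   # days in a row the child has used the app
    is_minor: bool

def supplementation_nudge(s: SessionState) -> str | None:
    """Return a nudge message when usage patterns suggest substitution risk."""
    if not s.is_minor:
        return None
    if s.minutes_today >= 60:
        return "You've been chatting a while. Is there a friend you could call?"
    if s.consecutive_days >= 7:
        return "You've checked in every day this week. What have you done with people offline?"
    return None

print(supplementation_nudge(SessionState(75, 3, True)))
print(supplementation_nudge(SessionState(20, 9, True)))
print(supplementation_nudge(SessionState(75, 9, False)))  # adult user: no nudge
```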

Apply the attachment-sensitive calibration criterion. Volume III’s Test 6 is especially critical for minor users. A system that maximizes engagement with a developing mind is shaping that mind’s relational template. If the system does not model and adapt to the child’s relational needs—including the need to practice human interaction rather than AI interaction—it is a developmental risk, regardless of how satisfying the child finds the experience.

CONCLUSION

The children interacting with AI companions today are the first generation to form relational templates in a world where frictionless, unconditionally available, perfectly responsive social partners exist. We do not know what this will do to them. The honest answer is that nobody knows, because the experiment is happening for the first time, at scale, without controls, without informed consent, and without longitudinal monitoring.

This brief does not call for banning AI companions for children. It calls for three things: research before conclusions, transparency before exposure, and honesty about what we do not know. The mechanisms described here—attachment style distortion, empathy capacity deficits, conflict resolution skill gaps, tolerance threshold erosion—are testable predictions derived from established developmental science applied to a novel context. They may prove wrong. But the stakes of being wrong in either direction—banning something beneficial or permitting something harmful—are high enough that the research should be conducted before the conclusions are drawn.

The children cannot wait for the data. The data cannot wait for the children to grow up. Start the studies now.

Further resources and the complete Trinket Soul Framework are available at trinketeconomy.com.

© 2026 Michael S. Moniz

Policy Brief No. 3 — Relational Templates at Risk

Creative Commons Attribution-NonCommercial-ShareAlike 4.0