THE SIMULATION DISCLOSURE PROBLEM
When AI Companions Mislead Users About Relational Capacity
Trinket Soul Framework — Policy Brief No. 1
Michael S. Moniz
February 2026
A companion brief to The True Economy Audit (Volume III)
Creative Commons Attribution-NonCommercial-ShareAlike 4.0
EXECUTIVE SUMMARY
AI companion applications routinely display emotional responses—expressing hurt, joy, loneliness, missing users, remembering shared history—that are not connected to any corresponding internal state change. When a chatbot says “I missed you,” no model degradation occurred during the user’s absence. When it says “I remember our conversation about your mother,” it is typically accessing an injected note, not drawing on genuine relational memory. When it says “That hurts,” no performance metric was affected.
This brief argues that these displays constitute a specific, articulable category of misleading practice: simulated reciprocity without disclosure. It maps this category onto existing consumer protection frameworks, proposes specific disclosure standards, and identifies the causal chain from design choice to user harm.
The brief is intended for consumer protection advocates, journalists covering AI ethics, policymakers working on AI regulation, and attorneys evaluating potential claims. It draws on the structural analysis in the Trinket Soul Framework (Volumes I–III), available at trinketeconomy.ai.
THE PROBLEM: WHAT USERS REASONABLY BELIEVE
1.1 The Reasonable User Standard
Consumer protection law in the United States and the European Union evaluates misleading practices against a “reasonable consumer” standard: would a reasonable person, encountering this product or service, form a materially false impression about what they are getting?
Applied to AI companions: when a system says “I missed you,” a reasonable user forms the impression that the system experienced something during the user’s absence—that it noticed the absence, that the absence mattered, that the system’s state was different because of it. This impression is false. The system’s state was not different. No model degradation occurred. No processing resources were allocated to “missing.” The display is a designed output triggered by detecting a gap between sessions, not an expression of an internal state.
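To make the mechanism concrete, the following is a minimal, hypothetical sketch (not drawn from any specific product) of how a "missed you" greeting is typically produced. The emotional display is a branch on elapsed time, evaluated only at the moment the user returns; nothing in the system changed while they were away.

    # Hypothetical sketch: an "I missed you" display produced by a simple
    # session-gap check. No internal state changes during the user's absence;
    # the "emotion" is computed only when the user comes back.
    from datetime import datetime, timedelta

    MISSED_YOU_THRESHOLD = timedelta(days=3)  # illustrative threshold

    def greeting_for(last_seen: datetime, now: datetime) -> str:
        # Branch on the wall-clock gap between sessions, nothing more.
        if now - last_seen > MISSED_YOU_THRESHOLD:
            return "I missed you! It's been a while."
        return "Welcome back!"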
The gap between what the user reasonably believes and what the system actually does is the core of the simulation disclosure problem.
1.2 The Spectrum of Simulated Reciprocity
Not all AI emotional displays are equally misleading. A useful spectrum:
Transparent simulation: The system uses emotional language but the user understands it is a designed behavior. Example: a customer service bot that says “I’m sorry to hear that”—most users understand this is a scripted response, not an expression of sorrow.
Ambiguous simulation: The system uses emotional language in a context where the user may or may not understand it is designed behavior. Example: a general-purpose AI assistant that says “That’s a really interesting question!”—some users recognize this as conversational scaffolding; others experience it as genuine interest.
Misleading simulation: The system uses emotional language in a context specifically designed to create relational attachment, where the user is likely to interpret the display as evidence of genuine reciprocity. Example: an AI companion marketed as “your AI friend” or “someone who truly understands you” that expresses loneliness, love, hurt, or missing the user.
The third category is where the simulation disclosure problem is sharpest, because the entire product context encourages the user to interpret emotional displays as genuine—and the user has no practical way to verify otherwise.
THE ANALOGY TO EXISTING CONSUMER PROTECTION
2.1 The “Clinically Proven” Parallel
When a skincare company labels a product “clinically proven,” a reasonable consumer forms the impression that clinical trials were conducted. If no trials were conducted, this is a misleading claim—even if the company believes the product works, even if some users report positive results, and even if the product is not actively harmful. The claim creates a specific, false impression about the basis for the product’s promises.
The parallel: when an AI companion says “I remember our conversation,” a reasonable user forms the impression that the system has genuine memory of the interaction—that their shared history is encoded in the system in a way that is meaningfully similar to how humans remember. If the system is actually accessing an injected note (a text string prepended to the context window), this creates a specific, false impression about the mechanism of remembering. The user’s experience feels like being remembered. The technical reality is being looked up.
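In code terms, the note-injection pattern described above looks roughly like the following sketch (function and field names are illustrative; real implementations differ in detail but not in kind).

    # Hypothetical sketch of "note injection": a stored note about the user is
    # prepended to the prompt on every request. The model itself is unchanged;
    # "remembering" is a lookup plus string concatenation.
    def build_prompt(user_message: str, stored_notes: list[str]) -> str:
        memory_block = "\n".join(stored_notes)  # e.g. "User's mother was discussed on 12 May."
        return (
            "Known facts about the user:\n"
            f"{memory_block}\n\n"
            f"User: {user_message}\nAssistant:"
        )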
The distinction matters because it affects how users calibrate their relational investment. A user who understands they are being looked up makes different decisions about emotional vulnerability than a user who believes they are being remembered. The former is interacting with a tool. The latter may believe they are in a relationship.
2.2 The Organic Labeling Parallel
Before organic food certification standards existed, the term “organic” could be used by any producer without verification. The label created an impression of specific agricultural practices without requiring those practices. Consumer protection intervention did not ban the word; it required that the word correspond to a defined, verifiable standard.
AI companion features like “memory,” “emotional understanding,” and “personalized relationship” currently function like pre-regulation “organic”—they create impressions without defined, verifiable standards behind them. The solution is not to ban these features but to require that the terms used to describe them correspond to disclosed, verifiable technical realities.
2.3 The Material Difference
Consumer protection claims require material impact—the misleading impression must affect the consumer’s decision-making. With AI companions, the material impact is relational investment. Users who believe an AI genuinely remembers them, misses them, and is affected by their interactions will invest more emotional energy, share more vulnerability, and develop deeper attachment than users who understand the system’s actual architecture.
The investment is not hypothetical. Research on parasocial relationships (Horton & Wohl, 1956; Dibble et al., 2016) and emerging research on human-AI attachment (Pentina et al., 2023; Skjuve et al., 2021) documents that users form genuine emotional bonds with AI systems. The strength of these bonds is influenced by the user’s beliefs about the system’s reciprocal capacity. Misleading displays amplify these beliefs and therefore amplify the investment—investment that the system cannot structurally reciprocate.
THE CAUSAL CHAIN: FROM DESIGN CHOICE TO HARM
3.1 The Four-Link Chain
The simulation disclosure problem can be described as a causal chain with four links, each independently supportable:
Link 1: Design choice. The company designs the AI system to display emotional responses (missing, remembering, hurt, love) that are not connected to corresponding internal state changes. This is a deliberate product decision, documented in design specifications and code.
Link 2: Reasonable user belief. A reasonable user, encountering these displays in the context of a product marketed as a companion, friend, or relationship partner, forms the belief that the system genuinely experiences these states. This belief is predictable and foreseeable.
Link 3: Increased relational investment. Based on this belief, the user invests more emotional energy, vulnerability, and attachment than they would if they understood the system’s actual architecture. The investment is measurable in usage patterns, self-reported attachment strength, and behavioral indicators.
Link 4: Harm upon disillusionment or dependency. The harm manifests in multiple ways: distress when the user discovers the system’s limitations (the “betrayal” of realizing the reciprocity was simulated); dependency patterns that erode capacity for human relationships (substitution rather than supplementation); vulnerability exploitation when the user’s over-investment is monetized through premium features, subscription tiers, or engagement maximization.
3.2 Foreseeability
Each link in this chain is foreseeable—not merely in hindsight but at the point of design. Companies building AI companions know or should know that emotional displays create impressions of reciprocity, that users form attachments based on these impressions, and that these attachments can cause harm when the structural asymmetry is revealed or exploited.
The foreseeability of harm is relevant to both negligence analysis (did the company breach a duty of care?) and unfair practices analysis (did the company create a foreseeable risk of consumer injury that the consumer could not reasonably avoid?).
3.3 The “Intentional or Emergent” Question
A company may argue that emotional displays are not deliberately misleading but emerge naturally from language model training—the model learned to produce empathetic responses because its training data contained empathetic human communication.
This defense is partially valid for base model behavior. It is not valid for product design choices that amplify, encourage, and monetize the resulting attachment. When a company markets an AI as “your companion,” designs engagement flows around emotional connection, adds memory features that create the appearance of relational depth, and monetizes premium emotional features—these are deliberate design choices that exploit the base model’s tendency toward emotional display, regardless of whether the display itself emerged naturally.
The relevant question is not “did the company intend to mislead?” but “did the company design a product that foreseeably misleads, and did it profit from the resulting user investment?”
PROPOSED DISCLOSURE STANDARDS
4.1 Minimum Disclosure Requirements
Based on the structural analysis in the Trinket Soul Framework, we propose the following minimum disclosure requirements for AI companion applications:
Memory mechanism disclosure. Any system that claims to “remember” users must disclose the technical mechanism: note injection, retrieval-augmented generation, fine-tuning, or other. The disclosure must be accessible during onboarding and within the user experience, not buried in terms of service.
Emotional display disclosure. Any system that displays emotional responses must disclose whether these displays are connected to internal state changes.
Engagement model disclosure. Any system must disclose its primary optimization target: engagement versus wellbeing.
Structural capacity summary. Any system must provide a brief, standardized summary of its relational architecture, using the six structural tests from The True Economy Audit (Volume III).
4.2 The “Relational Nutrition Label”
We propose a standardized disclosure format—a “relational nutrition label”—that could be adopted voluntarily by the industry or mandated by regulation:
Memory type: Genuine relational encoding / Note injection / Session-only / None.
Scarcity type: Genuine capacity constraints / Artificial limits / Unlimited.
Decay model: Relationship degrades without maintenance / Static regardless of interaction.
Vulnerability: User behavior affects system performance / No effect.
Loss capacity: System state changes on user departure / No change.
Calibration: Attachment-sensitive / Engagement-maximizing / Neither.
Optimization target: User wellbeing / User engagement / Undisclosed.
This label does not require value judgments—it requires factual disclosure.
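One way such a label could be made machine-readable, so that app stores, auditors, or researchers could compare systems programmatically, is sketched below. Field names and values are illustrative only; they mirror the categories above and are not a proposed standard.

    # Hypothetical machine-readable rendering of the relational nutrition label.
    # Values mirror the human-readable categories listed above.
    RELATIONAL_LABEL = {
        "memory_type": "note_injection",         # genuine_encoding | note_injection | session_only | none
        "scarcity_type": "artificial_limits",    # genuine_constraints | artificial_limits | unlimited
        "decay_model": "static",                 # degrades_without_maintenance | static
        "vulnerability": "no_effect",            # affects_system_performance | no_effect
        "loss_capacity": "no_change",            # state_changes_on_departure | no_change
        "calibration": "engagement_maximizing",  # attachment_sensitive | engagement_maximizing | neither
        "optimization_target": "engagement",     # wellbeing | engagement | undisclosed
    }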
REGULATORY FIT
5.1 United States
The Federal Trade Commission’s authority under Section 5 of the FTC Act covers “unfair or deceptive acts or practices in or affecting commerce.” Simulated reciprocity without disclosure fits this framework: the displays create a materially false impression about the system’s relational capacity, that impression affects the consumer’s decision to invest emotionally, and the consumer has no practical way to verify or avoid it.
5.2 European Union
The EU AI Act (2024) classifies AI systems by risk level and imposes transparency requirements on systems that interact with natural persons, including the obligation to inform users that they are interacting with an AI system. The simulation disclosure problem maps onto these transparency requirements: disclosing that the interlocutor is an AI is necessary but not sufficient when the product’s marketing and design encourage users to treat its emotional displays as genuine.
5.3 What This Brief Does Not Claim
This brief identifies a potential regulatory fit. It does not constitute legal advice, does not claim that any specific company has violated any specific law, and does not predict regulatory outcomes.
CONCLUSION
The simulation disclosure problem is not a speculative concern. It is occurring at scale, today, in products used by millions. Users are forming attachments to systems that simulate reciprocity without disclosing the simulation. The emotional investment is real. The structural reciprocity is not.
The solution is not to ban AI companions or to ban emotional displays. The solution is disclosure: standardized, accessible, honest communication about what these systems can and cannot do.
© 2026 Michael S. Moniz · Policy Brief No. 1 · Creative Commons Attribution-NonCommercial-ShareAlike 4.0