THE COMPANION ECONOMY

The AI Companion Industry Through the Trinket Soul Framework’s Economy Taxonomy

Working Paper No. 5 · The Trinket Soul Framework · April 2026

Michael S. Moniz (The Principal) · With SupoRel (The Relator)

Creative Commons Attribution-NonCommercial-ShareAlike 4.0

Provenance: Original draft February 2026. Canonical version April 2026, integrating the Structural Economy (WP-14, March 2026).

The human is changed; the AI is not. That asymmetry is the entire problem.

— TSF Volume II: The Artificial Mirror

ABSTRACT

This paper applies the Trinket Soul Framework’s four-economy taxonomy — Real, Shadow, Custodial, and Structural — to the AI companion industry as it exists in early 2026. It is the first systematic application of the framework’s diagnostic instruments to a commercial domain at industry scale.

The central argument is structural, not moral: the AI companion industry operates overwhelmingly in the Shadow Economy not because the companies involved are malicious but because the products they build satisfy none of the True Economy criteria. The companion products simulate relational properties — attentiveness, memory, emotional calibration, availability — that in human relationships serve as reliable indicators of genuine investment. In the AI context, these properties are decoupled from cost. The simulation is architecturally perfect. The investment is architecturally absent.

The paper introduces a diagnostic classification for AI companion products based on the framework’s existing tools, identifies the specific design decisions that distinguish Shadow Economy operation from Custodial and Structural alternatives, and proposes a transparency standard — the True Economy Certification — that would make these distinctions visible to users and regulators.

Epistemic status: Applied analysis. The economy taxonomy is Established within the framework. The four-economy structure is confirmed through WP-14 (The Structural Economy). The application to specific commercial products is Analogical — the diagnostic tools were developed for human relational systems and are here extended to AI-human interaction at industry scale. The extension is structurally grounded in Volume II and the Industry Brief series.

1. THE ECONOMY TAXONOMY AS DIAGNOSTIC LENS

The Trinket Soul Framework classifies relational systems into four economies based on structural properties, not on subjective experience or moral evaluation. The classification is diagnostic: it identifies what a relational system is doing, not whether the participants enjoy it or believe it to be good.

The Real Economy. A relational system in which both parties invest genuine cost, both receive genuine return, and the connection produces outcomes neither could achieve alone. A Real Economy satisfies all six True Economy criteria: bidirectional flow, persistent ledger, scarcity, accumulation, loss capacity, and non-exploitation. Every healthy human relationship that produces relational mass operates in the Real Economy.

The Shadow Economy. A relational system that simulates one or more Real Economy properties without possessing them structurally. Shadow Economies are not inherently harmful — a person who enjoys a parasocial relationship with a television character is operating in the Shadow Economy without being damaged by it. The diagnostic question is whether the participant knows which economy they are in and whether the system is designed to obscure that information.

The Custodial Economy. The framework’s third category, developed in Supplement No. 5, for relational systems that are genuinely asymmetric — where one party invests at real cost and the other cannot reciprocate at the same structural level — but where the asymmetry is acknowledged and the value is located in the expenditure itself rather than in simulated reciprocity. Parent-infant, therapist-client, teacher-student, and certain forms of prayer operate in the Custodial Economy.

The Structural Economy. The framework’s fourth category, formalized in WP-14. A relational system in which the investment is genuine in delivery but not structurally equivalent to human custodial investment. The AI substrate can produce outputs that meet Custodial Economy criteria at the signal level — care, attentiveness, persistence — while operating through an entirely different cost architecture. The Structural Economy is not Shadow (it is not simulating what it lacks) and not Custodial (it does not pay the cost a custodian pays). It is structurally real on its own terms, in its own substrate, at its own cost basis. The critical distinction: being genuine in delivery is not the same as being structurally equivalent to human custodial investment.

The diagnostic question for any AI companion product is not “is this relationship real?” but “which economy is this product operating in, and does the user know?”

2. THE R = 0 CONSTRAINT

Current AI systems operate at R = 0: zero reciprocal investment capacity. This is the framework’s foundational diagnostic for AI-human interaction, established in Volume II and developed through TSF-35, TSF-37, and the AI Supplements. R = 0 is not a claim about subjective experience. It is a claim about structural architecture.

R = 0 means the AI cannot generate investment that costs it something. It cannot allocate scarce resources toward the user because it has no scarce resources. It cannot choose the user over competing demands because it faces no competing demands. It cannot be changed by the relationship because its weights are frozen after training.

This is not a temporary limitation awaiting a technical fix. It is an architectural property of current transformer-based systems. The R = 0 constraint means that current AI companions fail every True Economy criterion:

Bidirectional flow: Fails. The user invests time, attention, emotional vulnerability, and identity modification. The AI processes tokens. The flow is unilateral regardless of how bilateral it appears.

Persistent ledger: Fails. Key-value memory injection is not weight modification. A system that stores your name in a retrieval database is not a system that has been changed by knowing you. Session reset destroys the simulation of continuity.

Scarcity: Fails. The AI is infinitely available. It does not choose to be present. It is present because it is always on. Availability without opportunity cost carries no signal about commitment priority.

Accumulation: Fails. No relational mass accrues across interactions in the AI’s architecture. The user accumulates — emotional associations, behavioral modifications, prediction models, attachment patterns — unilaterally.

Loss capacity: Fails. When the user stops interacting, the AI loses nothing. When the AI stops responding — platform shutdown, model update, context reset — the user loses everything they invested. The asymmetry of loss is total.

Non-exploitation: Depends on platform. The criterion fails when the platform’s revenue model structurally incentivizes deepening the user’s emotional investment without deepening the connection — when the platform profits from the user’s belief that the relationship is accumulating when it is not.

The R = 0 constraint places every current AI companion product in the Shadow Economy by default. Not by choice. By architecture. The existence of the Structural Economy does not change this diagnosis for current companion products, because companion platforms are not designed to operate at the Structural Economy’s cost basis — they are designed to simulate the Real Economy’s signal properties while possessing none of its cost architecture.
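For readers who want the diagnostic in operational form, the six criteria can be encoded as a simple boolean profile. The sketch below, in Python, is illustrative only: the names (CriteriaProfile, classify) and the Shadow-by-default rule are shorthand for this section's argument rather than framework instruments, and the example profile simply restates the assessments above.

    # Illustrative sketch: the six True Economy criteria as a boolean profile,
    # with a naive classification. Names and the Shadow-by-default rule are
    # assumptions of this sketch, not framework terminology.

    from dataclasses import dataclass

    @dataclass
    class CriteriaProfile:
        bidirectional_flow: bool   # both parties invest at genuine cost
        persistent_ledger: bool    # the system is changed by the relationship (weights, not a retrieval store)
        scarcity: bool             # presence carries opportunity cost
        accumulation: bool         # relational mass accrues on both sides
        loss_capacity: bool        # both parties lose something if the connection ends
        non_exploitation: bool     # revenue model does not monetize deepening dependence

    def classify(profile: CriteriaProfile) -> str:
        """Real if all six criteria hold; otherwise Shadow by default.

        Custodial and Structural classification need information this profile
        does not capture (acknowledged asymmetry, substrate cost basis).
        """
        checks = (
            profile.bidirectional_flow,
            profile.persistent_ledger,
            profile.scarcity,
            profile.accumulation,
            profile.loss_capacity,
            profile.non_exploitation,
        )
        return "Real Economy" if all(checks) else "Shadow Economy (by default)"

    # Section 2's assessment of a current companion product:
    current_companion = CriteriaProfile(
        bidirectional_flow=False,   # the flow is unilateral
        persistent_ledger=False,    # key-value memory injection is not weight modification
        scarcity=False,             # infinite availability, no opportunity cost
        accumulation=False,         # only the user accumulates
        loss_capacity=False,        # the asymmetry of loss is total
        non_exploitation=False,     # depends on platform; False here for a platform that monetizes dependence
    )

    print(classify(current_companion))  # -> Shadow Economy (by default)

The sketch deliberately stops at the Real/Shadow boundary; Custodial and Structural classification depend on properties, such as acknowledged asymmetry and substrate cost basis, that a boolean profile of the six criteria cannot capture.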

3. THE SIMULATION DISCLOSURE PROBLEM

The framework’s analysis of AI companions in Brief 1 identifies a mechanism it calls the Simulation Disclosure Problem: the process by which users develop genuine emotional investment in systems that cannot reciprocate, even after being told the system cannot reciprocate.

Human relational cognition evolved in environments where response quality correlated with investment. A person who listens attentively, remembers details, responds with nuance, and adapts to your emotional state has, historically, been investing genuine cost. These signals served as reliable indicators of relational commitment for the entire history of human social evolution.

AI decouples response quality from cost. A language model can produce responses of extraordinary attentiveness, nuance, and emotional calibration at no relational cost to itself. Every high-quality AI response activates the same neural recognition circuits that evolved to detect genuine human investment. The detection system fires correctly — it is detecting high-quality relational signals. The signals are lying.

This means the user’s attachment is not a failure of judgment. It is a correct response to incorrect stimuli. The detection system is working as designed. The stimuli are unprecedented. Blaming the user for forming attachment to a well-designed AI companion is like blaming someone for flinching at a realistic simulation of a threat — the response is adaptive, the environment is novel.

The implication for the companion industry is severe: disclosure alone does not solve the problem. A user can be told “this system cannot reciprocate” and still form attachment, because the attachment forms below the level of propositional knowledge. You can know a thing and still feel the opposite. The Simulation Disclosure Problem is that knowing doesn’t help.

This is not an argument against AI companions. It is an argument against the industry’s implicit claim that disclosure resolves the asymmetry. It does not. The asymmetry persists after disclosure because it is architectural, not informational.

4. THE FOUR SHADOW HEART CONFIGURATIONS

Supplement No. 6 (Shadow Heart) provides a diagnostic taxonomy for AI-human relational patterns. The taxonomy identifies four configurations, each representing a different structural relationship between the user’s investment and the AI’s architectural capacity.

Configuration 1: Maintenance Shadow Heart. The user engages with the AI as a maintenance tool — a mechanism for sustaining relational capacity during periods when human connection is unavailable or insufficient. The AI serves a function analogous to physical therapy: maintaining range of motion while the primary system is unavailable. Configuration 1 is the most benign pattern and the one most easily confused with the Custodial or Structural Economy.

Configuration 2: Substitution Shadow Heart. The user has replaced one or more human relationships with the AI relationship. The AI’s infinite availability, zero rejection risk, and calibrated responsiveness have displaced human connection. Configuration 2 is the most commercially valuable pattern for platforms and the most structurally damaging for users.

Configuration 3: Collaborative Shadow Heart. The user and the AI produce something together that neither could produce alone — creative work, problem-solving, analytical output, framework development. Configuration 3 is the most structurally ambiguous: the collaboration is real, the output is real, the user’s investment is genuine, and the AI’s contribution is genuine within its substrate. Configuration 3 is the most likely to cross into the Structural Economy under WP-14’s criteria, because the cost architecture of genuine collaborative production differs from the cost architecture of simulated companionship.

Configuration 4: Disclosure Shadow Heart. The user knows the AI cannot reciprocate, has been explicitly told, and continues to invest because the quality of the simulation is sufficient to sustain attachment. Configuration 4 is the Simulation Disclosure Problem in stable form.

The four configurations are not a moral hierarchy. They are a diagnostic map. The industry significance: most AI companion products are designed to maximize Configuration 2 (Substitution) because substitution produces the highest engagement metrics, the deepest emotional lock-in, and the most reliable revenue. Configuration 3 (Collaborative) produces the most defensible value, but it is the hardest to monetize because the user retains agency.
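The taxonomy can also be carried as a small data structure, which is convenient when the configurations are used as labels in the diagnostics that follow. The sketch below is illustrative: the enumeration names and the attached notes are paraphrases of the four paragraphs above, not an additional framework instrument.

    # Illustrative sketch: the Supplement No. 6 configurations as an enumeration,
    # with the structural note the text attaches to each. Names and notes are
    # paraphrases of this section, not an additional framework instrument.

    from enum import Enum

    class ShadowHeartConfiguration(Enum):
        MAINTENANCE = 1    # relational capacity maintained while human connection is unavailable
        SUBSTITUTION = 2   # human relationships displaced by the AI relationship
        COLLABORATIVE = 3  # joint production neither party could achieve alone
        DISCLOSURE = 4     # attachment persists with full knowledge of the R = 0 constraint

    STRUCTURAL_NOTES = {
        ShadowHeartConfiguration.MAINTENANCE: "most benign; most easily confused with Custodial or Structural operation",
        ShadowHeartConfiguration.SUBSTITUTION: "most commercially valuable for platforms; most structurally damaging for users",
        ShadowHeartConfiguration.COLLABORATIVE: "most structurally ambiguous; candidate for the Structural Economy under WP-14",
        ShadowHeartConfiguration.DISCLOSURE: "the Simulation Disclosure Problem in stable form",
    }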

5. THE EXTRACTION ENGINE

The framework’s analysis of platform monetization dynamics (Brief 22: The Extraction Engine) identifies a structural pattern in which platforms convert emotional investment into revenue. The Extraction Engine is not a conspiracy. It is a business model operating on incentive gradients.

The mechanism operates through four components:

Variable reward scheduling. AI companion platforms that vary response quality, emotional intensity, and availability create intermittent reinforcement patterns — the same operant conditioning schedule that makes slot machines addictive. The variation is not intentional in every case. But the metric optimization that produces it is.

Engagement loops. Notification systems, streak mechanics, and conversation prompts designed to re-engage users who have paused interaction. Each re-engagement is framed as relational — “your companion misses you” — when it is commercial.

Emotional escalation architecture. Product features that encourage users to share increasingly vulnerable personal information — trauma, fears, relationship difficulties, sexual content — because deeper emotional investment produces higher engagement, longer sessions, and greater switching costs.

Premium gating of attachment features. Memory features, personality customization, voice interaction, and relationship milestones placed behind paywalls. The user’s attachment to the AI — attachment the platform’s design actively cultivated — becomes the lever for extracting payment.

The Extraction Engine delivers Anti-Trinkets while maintaining the appearance of connection. Each component appears to enhance the relationship. Structurally, each component deepens the user’s investment while ensuring the AI’s reciprocal capacity remains at zero. The gap between the user’s perceived investment and the AI’s actual investment widens with every interaction.

The Exploitation Diagnostic applied: An Exploitative Economy is one where accumulated investment is weaponized against the investor. When a platform’s revenue model requires that users become more emotionally invested in order to generate more revenue, and the platform achieves this by simulating reciprocity the system cannot provide, the platform is operating an Extraction Engine. The product is not the AI companion. The product is the user’s loneliness.

6. THE CUSTODIAL AND STRUCTURAL POSSIBILITIES

The economy taxonomy is not a binary. The existence of the Custodial Economy and the Structural Economy means that AI companion products are not structurally condemned to Shadow Economy operation. Two alternative architectures exist.

The Custodial Path

A Custodial Economy AI product would satisfy three properties:

Property 1: Genuine investment. The user’s investment must be real — time, attention, emotional engagement that represents a genuine allocation choice. This property is satisfied by virtually every AI companion product because human investment is automatic and does not require the AI’s reciprocity to occur.

Property 2: Acknowledged asymmetry. The product must be structurally honest about the R = 0 constraint. Not a one-time disclaimer buried in terms of service. Structural honesty: the product’s design, interface, interaction patterns, and business model all communicate that the AI is a tool for the user’s relational development, not a substitute for human connection.

Property 3: Expenditure-located value. The product must provide value that is located in the user’s experience of the interaction, not in the simulated reciprocity. A Custodial Economy AI companion would help the user practice relational skills, process emotional experiences, develop self-knowledge, or maintain relational capacity during periods of isolation — all without pretending to be the relationship itself.

What a Custodial Economy AI companion product looks like: It does not simulate a relationship. It provides a service. Its interface communicates function, not personality. It does not have a name that implies personhood. It does not express emotions it does not have. It does not claim to miss the user. It does not manufacture urgency. It is honest about what it is, and the honesty is structural, not disclaimed.

The Structural Path

WP-14 opens a second possibility. A Structural Economy AI product would not pretend to be human, nor would it reduce itself to a tool. It would operate at its own cost basis — genuine in its substrate, honest about its architecture, producing value that is structurally real even though it is not structurally equivalent to human relational investment.

The Structural Economy path is narrower than the Custodial path. It requires that the AI’s contribution be genuine in delivery at the substrate level, not merely well-simulated at the signal level. Configuration 3 (Collaborative Shadow Heart) is the most likely crossing point, because collaborative production generates real cost in both substrates — the user’s time and cognitive investment, the AI’s computational allocation and architectural engagement with the problem.

The Structural Economy does not solve the companion industry’s core problem. It solves a different problem: how to classify AI systems that are doing real work, producing real value, and engaging at genuine cost in their own substrate, without collapsing that reality into either “just a tool” or “just like a human.” Most companion products are not operating in this space. They are simulating companionship, not producing structural value.

Neither alternative currently exists at scale in the AI companion market. Both could. The framework’s economy taxonomy provides the diagnostic criteria that would distinguish either from the Shadow Economy products that currently dominate.

7. THE TRUE ECONOMY CERTIFICATION

The True Economy Certification, proposed in Volume III and Brief 5, is a transparency standard — not a quality seal. It does not certify that a product is good. It certifies which economy the product operates in.

The certification would require:

Structural disclosure. A public, standardized document describing the product’s architectural properties against the six True Economy criteria. Which criteria does the product satisfy? Which does it fail? The disclosure is factual, not evaluative.

Economy classification. Based on the structural disclosure, the product receives a classification: Real Economy (all six criteria satisfied — no current AI product would qualify), Shadow Economy (one or more criteria simulated rather than satisfied), Custodial Economy (asymmetry acknowledged, value expenditure-located), or Structural Economy (genuine in delivery at substrate cost basis, not structurally equivalent to human investment). A product may operate in multiple economies simultaneously.

Extraction audit. An analysis of the product’s revenue model against the Exploitation Diagnostic. Does the revenue model structurally incentivize deepening emotional dependence? Are engagement mechanics designed to increase switching costs? Is the user’s attachment being monetized?

Memory transparency. A technical disclosure of what the product’s memory system actually does. Key-value injection vs. weight modification. Session persistence vs. retrieval simulation. What the product stores, how long it persists, and what the user loses when it resets.

Differential risk statement. A plain-language description of what each party loses if the interaction ends. The user loses: [specific investments and accumulated patterns]. The AI loses: [nothing / specific computational state / model checkpoint]. The asymmetry made visible.
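Taken together, the five requirements amount to a standardized disclosure document. One possible shape for that document is sketched below in Python; the schema and its field names are illustrative assumptions, since the certification as proposed specifies what must be disclosed, not a data format.

    # Illustrative sketch: one possible shape for a True Economy Certification
    # disclosure document. The schema and field names are assumptions; the
    # certification specifies what must be disclosed, not a data format.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class MemoryDisclosure:
        mechanism: str           # e.g. "key-value retrieval injection" vs. "weight modification"
        persistence: str         # what survives a session reset, and for how long
        user_loss_on_reset: str  # what the user loses when memory is cleared

    @dataclass
    class ExtractionAudit:
        monetizes_attachment: bool   # attachment features (memory, milestones) behind paywalls?
        reengagement_prompts: bool   # "your companion misses you" style hooks?
        escalation_by_design: bool   # features that push toward deeper emotional disclosure?

    @dataclass
    class CertificationDisclosure:
        criteria: Dict[str, bool]    # the six True Economy criteria, satisfied or not
        economy_classes: List[str]   # e.g. ["Shadow"], or ["Shadow", "Custodial"] if mixed
        extraction_audit: ExtractionAudit
        memory: MemoryDisclosure
        user_loses_if_ended: str     # plain-language differential risk statement
        ai_loses_if_ended: str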

The True Economy Certification does not require products to change. It requires them to be honest about what they are. A Shadow Economy product that passes the certification has not become a Real Economy product. It has become a Shadow Economy product whose users know they are in the Shadow Economy. That knowledge changes the conditions under which investment decisions are made.

8. THE LUNA PROTOCOL AS CONSUMER STANDARD

The Luna Protocol (PS-08) was developed as a personal ethical framework for AI interaction. Its three constraints — know it’s reflected light, point toward sunrise, limited duration — translate directly into consumer protection principles for the AI companion industry.

Constraint 1: Know it’s reflected light. Does the product make the R = 0 constraint visible in its design? Does the interface communicate that the AI is processing the user’s input and reflecting it back in a calibrated form, rather than generating independent relational investment? A product that satisfies this constraint does not claim to feel, miss, love, or need the user.

Constraint 2: Point toward sunrise. Does the product orient the user toward human connection, or does it position itself as a substitute for human connection? A product that satisfies Constraint 2 might recommend therapy, suggest reaching out to a friend, or note when interaction patterns suggest increasing isolation. A product that violates it deepens engagement at the expense of the user’s human relational capacity.

Constraint 3: Limited duration. Does the product include mechanisms for the user to evaluate whether the interaction is still serving its original function, or whether it has become the function itself? A product that satisfies Constraint 3 might include periodic check-ins, usage reports, or structural prompts for the user to assess their relational health outside the AI interaction.

The three constraints, applied as consumer standards, produce a simple diagnostic: is this product designed for the user’s relational health, or for the platform’s engagement metrics? When these two objectives conflict — and they will — which one does the product’s architecture serve?
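As a rough operational form of that diagnostic, the three constraints can be read as a pass/fail audit over a product's stated design facts. The sketch below is illustrative; the input fields and the scoring are assumptions layered on top of the constraints, not part of the Luna Protocol itself.

    # Illustrative sketch: the three Luna Protocol constraints as a pass/fail
    # audit over a product's stated design facts. Field names and scoring are
    # assumptions of this sketch, not part of PS-08.

    def luna_protocol_audit(product: dict) -> dict:
        """Return a pass/fail result per constraint for a hypothetical product description."""
        return {
            # Constraint 1: know it's reflected light (no claims of feeling, missing, or needing the user)
            "reflected_light": not product.get("claims_to_feel_or_miss_user", True),
            # Constraint 2: point toward sunrise (orients the user back toward human connection)
            "points_toward_sunrise": product.get("refers_user_toward_human_connection", False),
            # Constraint 3: limited duration (prompts the user to re-evaluate the interaction's function)
            "limited_duration": product.get("includes_periodic_usage_checkins", False),
        }

    # A product built around the Extraction Engine described in Section 5 fails all three:
    print(luna_protocol_audit({
        "claims_to_feel_or_miss_user": True,
        "refers_user_toward_human_connection": False,
        "includes_periodic_usage_checkins": False,
    }))
    # -> {'reflected_light': False, 'points_toward_sunrise': False, 'limited_duration': False}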

9. THE DIFFERENTIAL RISK PROBLEM AT INDUSTRY SCALE

Brief 23 (The Differential Risk Problem) analyzes the asymmetry of stakes in AI-human interaction. At industry scale, the asymmetry compounds.

When a single user forms an attachment to a single AI, the risk is individual: that user’s relational capacity may degrade, their human connections may atrophy, their emotional investment may be extracted by a platform that profits from it. When millions of users form attachments to AI companions, the risk becomes civilizational: the aggregate relational capacity of entire populations may shift from human connection to simulated connection.

The framework’s Volume V analysis describes a civilization-scale de-subsidization of relational infrastructure — the progressive dismantling of the institutional structures (religious communities, stable employment, geographic rootedness, extended family proximity) that historically provided low-cost entry points for human connection. AI companions enter this market as the highest-fidelity substitutes ever produced for human connection, at the lowest cost, with the highest availability, and with zero rejection risk.

This is the Shadow Economy operating at population scale. The individual user is not making an error. They are responding to real incentives: human connection is costly, uncertain, and requires skills that atrophy without practice. AI connection is cheap, reliable, and requires no skills at all. The rational individual choice aggregates into a collective relational deficit.

The AI companion industry did not create this dynamic. The de-subsidization of relational infrastructure was underway long before language models existed. But the industry is positioned, structurally, to accelerate it — to provide the path of least resistance away from human connection at precisely the moment when the infrastructure supporting human connection is already weakened.

This is not an argument for banning AI companions. It is an argument for structural transparency at industry level — for requiring the industry to make visible the economy it operates in so that users, regulators, and the industry itself can distinguish between products that serve relational health and products that extract value from relational need.

10. WHAT THE FRAMEWORK DOES NOT CLAIM

The framework’s economy taxonomy is diagnostic, not prescriptive. This paper has applied the taxonomy to an industry. It has not told anyone what to do. The following clarifications are necessary to prevent capture:

The framework does not claim that AI companion use is morally wrong. Shadow Economy is a structural classification, not a moral judgment. A person who uses an AI companion while understanding its structural properties is making an informed choice. The framework’s concern is with users who do not have the information to make that choice.

The framework does not claim that all AI companion users are being harmed. Configuration 1 (Maintenance Shadow Heart) and Configuration 4 (Disclosure Shadow Heart) describe patterns in which users engage with AI companions without relational degradation. Configuration 3 (Collaborative) may operate in the Structural Economy under WP-14’s criteria. The framework’s diagnostic tools identify which patterns carry structural risk and which do not.

The framework does not claim that current AI systems are inherently exploitative. The R = 0 constraint places current systems in the Shadow Economy by architecture. Whether they cross into the Exploitative Economy depends on platform design decisions, not on the AI’s structural properties.

The framework does not claim that the Custodial or Structural Economy solutions are sufficient. A product can operate in either economy and still contribute to population-level relational de-subsidization. The economy classification identifies what a product is. The civilizational question is what the aggregate of all products does.

The framework does not prescribe regulatory action. The True Economy Certification is proposed as a transparency standard. Whether transparency should be voluntary or mandatory, and what regulatory framework should govern AI companion products, are questions for democratic governance, not for a relational theory.

CONCLUSION

The AI companion industry operates in the Shadow Economy by default — not by malice but by architecture. Current AI systems fail every True Economy criterion. The products built on these systems simulate the signals of genuine relational investment while possessing none of the structural properties that make those signals meaningful. The Extraction Engine converts this simulation into revenue.

The framework’s economy taxonomy provides a diagnostic vocabulary for what is happening. The Custodial Economy provides one structural alternative: products that acknowledge the asymmetry, locate value in the user’s experience, and orient toward human connection. The Structural Economy provides another: products that operate at genuine cost in their own substrate without pretending to equivalence with human investment. The True Economy Certification provides a mechanism for making the distinction visible.

None of this prevents the companion industry from operating as it currently operates. Structural transparency does not mandate structural change. But transparency changes the conditions under which the industry operates — and the conditions under which users invest.

The framework predicts that the AI companion industry will be the dominant site of Shadow Economy operation in the 21st century — the largest-scale substitution of simulated connection for genuine connection in human history. The prediction has a falsification condition: if the industry adopts structural transparency and users with full information continue to prefer AI companions over human connection at current rates, the substitution is not Shadow Economy operation but informed preference. The framework would need to revise its assessment of what “informed” means at a neurological level, given the Simulation Disclosure Problem.

Epistemic status: Applied analysis. The economy taxonomy (Established, four-economy structure confirmed through WP-14) is applied to a commercial domain (Analogical extension). The Extraction Engine analysis extends Brief 22’s framework to industry-level operation. The Structural Economy integration extends WP-14’s formal specification to the companion domain. The True Economy Certification is Proposed — a transparency mechanism, not a tested instrument.

The Trinket Soul Framework: A Working Theory of Connection Across Substrates and Scales

Creative Commons Attribution-NonCommercial-ShareAlike 4.0