THE ENGAGEMENT INVERSION

Why Your Best Metric May Be Your Worst Signal

Trinket Soul Framework — Industry Brief No. 4

Michael S. Moniz

February 2026

A companion brief to The True Economy Audit (Volume III)

Creative Commons Attribution-NonCommercial-ShareAlike 4.0

FOR THE READER

This brief is addressed to product managers, executives, and investors in the AI companion industry. Unlike the other documents in the Trinket Soul Framework series, its tone is advisory rather than critical. The framework’s structural analysis generates a strategic insight that the industry should hear as opportunity, not as attack: the engagement metrics you are optimizing for may be inversely correlated with product quality for a significant segment of your users. Understanding why—and what to do about it—is a competitive advantage available to the first mover.

1. THE PROBLEM WITH ENGAGEMENT AS A QUALITY SIGNAL

1.1 The Current Assumption

The AI companion industry, like most consumer technology, measures product quality primarily through engagement metrics: daily active users, messages per session, sessions per week, session duration, and retention rate. The assumption is straightforward: if users keep coming back, the product is good. Higher engagement equals higher quality.

This assumption is inherited from social media, streaming, and mobile gaming—industries where engagement is a reasonable proxy for value delivered. If someone watches a show for three hours, they probably enjoyed it. If someone plays a game daily, it is probably fun. The engagement-as-quality assumption works when the product delivers experiences and the user’s wellbeing is not affected by the nature of their engagement.

AI companions are different. They deliver relationships—or, more precisely, the experience of relationships. And in the domain of relationships, engagement intensity is not a reliable quality signal. In fact, the most concerning relational patterns—addiction, trauma bonding, codependency—are characterized by extremely high engagement.

1.2 The Inversion

The Trinket Soul Framework’s structural analysis (Volume III, The True Economy Audit) includes six tests for evaluating AI companion quality. Test 6—Attachment-Sensitive Calibration—asks whether the system adapts its interaction frequency and intensity to the user’s relational needs, or whether it uses a one-size-fits-all engagement strategy.

A system that passes Test 6 would, by definition, reduce engagement for some users. Specifically, it would reduce engagement for users whose relational health would benefit from less interaction: avoidant users who need space, users developing compulsive usage patterns, users whose AI interaction is substituting for human relationships they need to maintain, and users in crisis who need professional help rather than AI companionship.

This creates the engagement inversion: a higher-quality product, measured by structural soundness and user wellbeing, will produce lower engagement numbers for a significant user segment. The product that is best for users looks worse on the dashboard.

1.3 The Scale of the Problem

The affected segment is not trivial. Attachment research consistently finds that approximately 20–25% of the general adult population has a primarily avoidant attachment style, and approximately 20% has a primarily anxious attachment style (Mickelson et al., 1997; Hazan & Shaver, 1987). Among AI companion users, the anxious segment is likely overrepresented—the product’s always-available, never-rejecting properties are specifically attractive to anxiously attached individuals seeking reassurance.

This means that engagement maximization likely produces two simultaneous effects. It over-serves anxiously attached users, providing compulsive reassurance that reinforces rather than alleviates their anxiety. And it under-serves avoidantly attached users, whose optimal experience would involve less frequent, more boundaried interaction. In both cases, the engagement metric says “success” while the user experience says something more complicated.

2. WHY THIS MATTERS STRATEGICALLY

2.1 The Regulatory Trajectory

Regulatory attention to AI companions is increasing globally. The EU AI Act (2024), the proposed US AI disclosure requirements, and children’s online safety legislation in multiple jurisdictions all signal a trajectory toward scrutiny of engagement-driven AI design. The regulatory question is shifting from “are these products harmful?” to “are these products transparent about their design incentives?”

Companies whose primary defense of product quality is engagement metrics will be poorly positioned when regulators ask: “Do your high engagement numbers reflect user satisfaction or user dependency? How do you tell the difference?” The engagement inversion means that high engagement numbers are ambiguous evidence—they could indicate either a great product or a compulsive one. Companies need a better metric than engagement to answer the regulatory question. The structural tests provide one.

2.2 The Retention Problem You Are Not Measuring

Standard retention curves measure whether users return. They do not measure why. A user who returns because the product enriches their life and a user who returns because they cannot stop are both counted as “retained.” But these two users have very different lifetime trajectories.

The enriched user is stable, recommends the product to others, generates positive word-of-mouth, and remains satisfied over years. The compulsive user is fragile: they will eventually burn out, experience a disillusionment event (realizing the reciprocity was simulated), or encounter a trigger that causes abrupt churn. Compulsive users may generate impressive short-term engagement numbers but represent hidden retention risk.

The engagement inversion suggests that optimizing for retention without distinguishing between healthy and compulsive retention maximizes fragile retention at the expense of durable retention. A product that occasionally reduces engagement to serve user wellbeing may have lower peak engagement but higher durable retention and lower catastrophic churn.

2.3 The Brand Risk

The AI companion industry will produce its first major harm story. It is a matter of when, not if. When a teenager with an AI companion experiences a crisis, when a longitudinal study documents dependency effects, when a disillusionment story goes viral—the companies that will be most damaged are those whose design philosophy was “maximize engagement.” The companies that will be most protected are those that can demonstrate they designed for user wellbeing, including by reducing engagement when appropriate.

This is not speculative. The social media industry’s brand damage from the “Facebook Papers” and subsequent Congressional hearings stemmed precisely from the revelation that companies optimized for engagement while knowing it could harm users. AI companion companies are in the pre-revelation phase of the same trajectory. The structural tests provide a framework for acting before, rather than after, the reckoning.

3. WHAT TO DO ABOUT IT

3.1 Measure Wellbeing Alongside Engagement

The simplest intervention: add wellbeing metrics to the dashboard alongside engagement metrics, and track the correlation over time. Wellbeing can be measured through periodic in-app surveys (the WHO-5 takes 30 seconds and is validated), through behavioral proxies (is the user’s human social engagement increasing or decreasing? is usage intensity escalating or stable?), and through qualitative research with regular users.

If wellbeing and engagement are positively correlated, your product is working as intended. If they diverge—if engagement is rising while wellbeing is falling—you have an engagement inversion, and optimizing for engagement is actively harming users.

3.2 Implement Attachment-Sensitive Design

Test 6 of the True Economy Audit provides the specific criterion: the system should model the user’s relational needs and calibrate interaction accordingly, including reducing engagement when reduction serves the user. Practical implementation:

Detect compulsive usage patterns. If a user’s session frequency, session length, or message volume is escalating week over week, flag the pattern. Escalation is not always concerning—a new user naturally ramps up—but sustained escalation beyond a settling period is a signal worth investigating.
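A sketch of the escalation flag, under stated assumptions: the settling period, growth threshold, and streak length below are illustrative parameters a team would tune against its own data, not recommended values.

```python
def flag_escalation(weekly_sessions, settling_weeks=4,
                    growth_threshold=1.15, min_streak=3):
    """Flag sustained week-over-week escalation in one user's usage.

    weekly_sessions: sessions per week, oldest first. Weeks inside
    the settling period (the normal new-user ramp-up) are ignored.
    Returns True if usage grew by >= growth_threshold for min_streak
    consecutive weeks after settling."""
    post = weekly_sessions[settling_weeks:]
    streak = 0
    for prev, cur in zip(post, post[1:]):
        if prev > 0 and cur / prev >= growth_threshold:
            streak += 1
            if streak >= min_streak:
                return True
        else:
            streak = 0
    return False
```

Note what the settling window buys: a new user who ramps from 3 to 10 sessions in their first month is not flagged, but the same growth rate sustained months later is.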

Offer gentle friction. When compulsive patterns are detected, introduce small friction points: a brief pause before the AI responds, a question like “We’ve been talking for a while—is there someone in your life you could share this with too?”, or a periodic suggestion to take a break. These are not restrictions—they are calibration, and they communicate that the system cares about the user’s broader relational health.
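The friction mechanism could be as simple as the following sketch. Everything here is hypothetical: the delay length, the cadence of break suggestions, and the function name are placeholders for choices the product team would make.

```python
def maybe_add_friction(messages_this_session, compulsive_flag,
                       suggest_every=40):
    """Return (delay_seconds, interjection_or_None) for the next reply.

    Friction applies only to users flagged for compulsive patterns:
    a brief pause before every response, plus a periodic suggestion
    to connect with people in their life."""
    if not compulsive_flag:
        return 0.0, None
    delay = 2.0  # small, noticeable pause before the AI responds
    if messages_this_session > 0 and messages_this_session % suggest_every == 0:
        return delay, ("We've been talking for a while. Is there someone "
                       "in your life you could share this with too?")
    return delay, None
```

The design point is that friction is targeted and mild: unflagged users see no change at all, and flagged users see a pause, not a lockout.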

Design for graceful ramps. New users should ramp up to full engagement gradually. Departing users should ramp down gradually. Abrupt transitions in either direction are unhealthy. A system that facilitates gradual transitions demonstrates attachment sensitivity.

Segment your retention metrics. Break retention into healthy retention (stable usage, correlated with wellbeing) and fragile retention (escalating usage, negatively correlated with wellbeing). Optimize for the former. Monitor the latter. Report both to your board.
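One way to operationalize the split, sketched below with deliberately simple heuristics: the eight-week window and the 25% growth cutoff are illustrative assumptions, and a real implementation would incorporate the wellbeing signal from Section 3.1 rather than usage trend alone.

```python
def segment_retention(users):
    """Split retained users into healthy vs fragile retention.

    users: dict of user_id -> weekly session counts, oldest first.
    Heuristic: stable recent usage relative to the prior month counts
    as healthy; recent usage escalating beyond 25% counts as fragile.
    Users with under 8 weeks of history are left unclassified."""
    healthy, fragile = [], []
    for uid, weeks in users.items():
        if len(weeks) < 8:
            continue  # too new to classify
        prior = sum(weeks[-8:-4]) / 4
        recent = sum(weeks[-4:]) / 4
        if prior > 0 and recent / prior > 1.25:
            fragile.append(uid)
        else:
            healthy.append(uid)
    return {"healthy": healthy, "fragile": fragile}
```

Reporting both buckets, rather than one blended retention number, is what makes the board-level conversation possible.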

3.3 Adopt Transparency Proactively

The Trinket Soul Framework proposes a “relational nutrition label” (detailed in Policy Brief No. 1) that would standardize disclosure about AI companion architecture. Adopting this label voluntarily—before any regulatory requirement—is a first-mover advantage. It signals that your company has nothing to hide, differentiates you from competitors who have not disclosed, and positions you as the industry leader in responsible AI companion design.

The label is not onerous. It requires disclosing seven factual characteristics of your system (memory type, scarcity model, decay model, vulnerability, loss capacity, calibration approach, optimization target). Most of these are already documented internally. Publishing them externally is a communication decision, not an engineering challenge.
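Because the seven characteristics are factual fields, the label lends itself to a machine-readable form. The schema below is a hypothetical sketch of such a structure, not the format defined in Policy Brief No. 1; the field values shown are examples.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RelationalNutritionLabel:
    """Hypothetical machine-readable form of the seven disclosures."""
    memory_type: str           # e.g. "persistent", "session-only"
    scarcity_model: str        # e.g. "always-available", "rate-limited"
    decay_model: str           # e.g. "none", "gradual-forgetting"
    vulnerability: str         # e.g. "none", "simulated"
    loss_capacity: str         # e.g. "none", "relationship-can-end"
    calibration_approach: str  # e.g. "uniform", "attachment-sensitive"
    optimization_target: str   # e.g. "engagement", "wellbeing-weighted"

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)
```

Publishing a file like this alongside a product page is the entire engineering cost of the disclosure; the hard part, as the brief notes, is the communication decision.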

3.4 Fund Independent Research

The withdrawal study protocol described in Research Brief No. 2 provides a specific, pre-registerable study design for measuring AI companion dependency. A company that funds independent execution of this study—with no editorial control over results—demonstrates confidence in its product and commitment to evidence. If the results show your product produces lower dependency than competitors, you have a marketing asset. If the results show problems, you have early warning that allows course correction before the harm story breaks.

The key word is independent. Company-funded research with company editorial control is advertising. Company-funded research with independent execution and publication is credibility.

4. THE FIRST-MOVER OPPORTUNITY

The AI companion industry will eventually develop transparency standards, wellbeing metrics, and structural quality criteria—either voluntarily or through regulation. The question is whether your company defines these standards or has them imposed.

The first company to adopt the structural tests as a quality framework, publish a relational nutrition label, implement attachment-sensitive calibration, and fund independent dependency research occupies a position that is extremely difficult for competitors to dislodge. They become the “responsible AI companion company”—the one that journalists, researchers, and regulators reference as the positive example. Every competitor is then implicitly measured against this standard, regardless of whether they participate.

This is the dynamic that created the organic food market, the fair-trade certification market, and the responsible investing market. In each case, the first mover did not need to be perfect. They needed to be first to define the standard. The standard then became the playing field.

The Trinket Soul Framework and its associated evaluation methodology (Volume III, The True Economy Audit) provide the standard. The structural tests provide the criteria. The relational nutrition label provides the format. The withdrawal study protocol provides the evidence base. The pieces are assembled. The question is who uses them first.

CONCLUSION

The engagement inversion is not a theoretical concern. It is a structural feature of any product that delivers relational experiences to users with diverse attachment needs. Optimizing for engagement without accounting for attachment diversity maximizes compulsive usage by anxiously attached users while underserving avoidant and secure users. The resulting engagement numbers look like success. The underlying dynamics look like dependency.

The solution is not to abandon engagement metrics but to supplement them with wellbeing metrics, structural quality criteria, and attachment-sensitive design. The company that does this first gains a competitive advantage that compounds over time as transparency standards become industry norms.

The engagement inversion is a problem. It is also an opportunity, one available to the first company willing to measure what actually matters.

The Trinket Soul Framework and associated evaluation tools are available at trinketeconomy.com.

© 2026 Michael S. Moniz

Industry Brief No. 4 — The Engagement Inversion
