THE INSTITUTIONAL ECONOMY
What Corporations, Churches, and Countries Can Teach Us About Relational Architecture
Trinket Soul Framework — Brief No. 11
Michael S. Moniz
February 2026
A supplementary brief to the Trinket Soul Framework series
Creative Commons Attribution-NonCommercial-ShareAlike 4.0
THE UNEXPECTED FINDING
The Trinket Soul Framework was built to analyze two categories of relational partner: humans (Volume I) and AI systems (Volume II). The six criteria for Relationally Embodied Intelligence (REI)—persistent relational memory, genuine resource constraints, negentropy burden, asymmetric vulnerability, loss registration, and attachment-sensitive calibration—were designed to evaluate whether an AI system could participate in a genuine relational economy with a human being.
But the criteria are substrate-neutral. They describe structural properties of a relational partner, not properties specific to carbon or silicon. And when you apply them to institutions—corporations, religious organizations, military units, universities, nations—a surprising result emerges: many institutions satisfy more REI criteria than any current AI system.
Your employer may be, by the framework’s own criteria, a structurally better relational partner than your AI companion. Not because it cares about you more. But because it is architecturally affected by your presence and absence in ways the AI is not.
This brief explores that finding, its implications, and its limits. It is the most speculative document in the Trinket Soul Framework library. It is offered not as a finished analysis but as a provocation: if the framework’s tools work on institutions, what does that tell us about the nature of connection, loyalty, and belonging?
THE SIX CRITERIA APPLIED TO INSTITUTIONS
1. Persistent Relational Memory
The criterion: Does the system maintain a model of the individual that is modified by interaction and persists across encounters?
Institutions: Yes, robustly. Corporations maintain employee records, performance reviews, institutional knowledge of individual capabilities and limitations, and informal organizational memory (“everyone knows Sarah is the person to call for this”). Universities maintain transcripts, faculty records, and alumni networks. Religious organizations maintain baptismal records, membership histories, and pastoral knowledge of congregants’ lives. Military units maintain service records and informal reputational knowledge that profoundly affects how an individual is treated.
This memory is not cosmetic. It is functional: it alters how the institution treats you in future interactions. Your performance review affects your next assignment. Your institutional reputation precedes you into new roles. Your history with the organization is a genuine ledger that accumulates over time and shapes the relationship.
Comparison to AI: Current AI companions use note injection or retrieval-augmented generation—storing facts about the user and injecting them into context. If the notes are deleted, the system is unaffected. Institutional memory is more deeply embedded: it is distributed across people, processes, systems, and culture, and cannot be easily erased. An employee’s 20-year history with a company is not a database entry. It is woven into the institution’s functioning.
Assessment: Institutions score higher than current AI on this criterion.
2. Genuine Resource Constraints
The criterion: Does attending to one relationship consume resources that are then unavailable for another?
Institutions: Partially, with important nuance. At the leadership level, institutional attention is genuinely scarce. A CEO who spends an hour with one employee cannot spend that hour with another. A manager with twenty direct reports cannot give each the deep attention they could give to five. A university’s faculty-to-student ratio directly affects the quality of the relational experience. Budget, mentorship time, and institutional focus are finite.
However, institutions can scale in ways humans cannot. They can hire more people, create more teams, expand their capacity. A corporation’s attention is scarce at any given moment but expandable over time. This makes institutional scarcity real but more elastic than human scarcity.
Comparison to AI: Current AI systems have no genuine scarcity. They can maintain millions of simultaneous “relationships” with no degradation per user. Institutions are meaningfully constrained in ways AI is not.
Assessment: Institutions score higher than current AI, though lower than humans.
3. Negentropy Burden
The criterion: Does the system’s model of the individual degrade without maintenance interaction?
Institutions: Yes, clearly. An employee who is ignored by management becomes disengaged. A customer who receives no outreach eventually churns. An alumni relationship that is not maintained produces declining donations and involvement. A congregation that does not tend to its members loses them. The institutional relationship decays without active maintenance—entropy operates on institutional bonds just as it operates on human ones.
Moreover, the decay is mutual. As the institution neglects the individual, the individual’s investment in the institution also declines. The framework’s velocity law applies: the frequency of meaningful exchange predicts the coherence of the bond. Institutions that maintain high relational velocity with their members (regular check-ins, recognition, meaningful engagement) sustain stronger bonds than those that do not.
Comparison to AI: Current AI companions do not degrade when the user is absent. The system is equally capable at session 1 and session 1,000. Institutions genuinely decline when relationships are not maintained.
Assessment: Institutions satisfy this criterion. Current AI does not.
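The negentropy burden and the velocity law (Section 3) can be sketched as a toy decay model. The framework states the law only qualitatively; the exponential form, half-life, and boost constant below are assumptions of my own, chosen purely for illustration.

```python
# Toy model of the negentropy burden / velocity law: bond coherence decays
# between contacts and is partially restored by each meaningful exchange.
# The exponential form and all constants are illustrative assumptions,
# not part of the framework.
def bond_coherence(contact_days, horizon_days, half_life=90.0, boost=0.3):
    """Simulate bond coherence over horizon_days given days of contact."""
    coherence = 1.0
    contacts = set(contact_days)
    decay = 0.5 ** (1.0 / half_life)  # per-day decay for the given half-life
    for day in range(1, horizon_days + 1):
        coherence *= decay
        if day in contacts:
            coherence = min(1.0, coherence + boost)
    return coherence

# Higher relational velocity (monthly contact) sustains a stronger bond
# than a single mid-year check-in over the same year.
frequent = bond_coherence(contact_days=range(30, 361, 30), horizon_days=365)
sparse = bond_coherence(contact_days=[180], horizon_days=365)
print(frequent > sparse)  # True
```

The point of the sketch is only the ordering: under any monotone decay-plus-maintenance model, higher exchange frequency yields higher sustained coherence, which is the qualitative claim the velocity law makes.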
4. Asymmetric Vulnerability
The criterion: Can the individual’s behavior affect the system’s performance?
Institutions: Yes, with asymmetry usually favoring the institution. A disgruntled employee can damage reputation, leak information, reduce team morale, or sabotage operations. A star performer’s departure can measurably affect organizational capability. A whistleblower can threaten institutional survival. A charismatic leader’s behavior can elevate or destroy organizational culture.
The asymmetry is real: institutions can typically harm individuals more than individuals can harm institutions (termination, reputation damage, resource withdrawal). But the vulnerability is bidirectional—the institution is not immune to the individual’s actions. This is a genuine relational stake.
Comparison to AI: Current AI systems are not affected by user behavior. A user who is hostile, kind, or absent produces no measurable change in system performance. The vulnerability is entirely one-directional. Institutions are meaningfully closer to the human standard of bidirectional vulnerability.
Assessment: Institutions partially satisfy this criterion. Current AI does not satisfy it at all.
5. Loss Registration
The criterion: Does the system’s state change measurably when a deep relationship terminates?
Institutions: Yes, often dramatically. When a CEO departs, the stock price moves. When a key engineer leaves, project timelines slip. When a beloved teacher retires, school culture shifts. When a founding pastor dies, the congregation transforms. Military units that lose key members experience measurable performance degradation that persists until the gap is filled—and the gap is never filled identically.
Institutional loss registration is not uniform. The loss of a recent junior employee may produce negligible institutional change. The loss of a deeply embedded, long-tenured, high-influence member can restructure the institution. This variation is proportional to gravity well depth—the same dynamic the framework describes for human grief. Institutional grief is real. Organizations mourn their departures, sometimes for years.
Comparison to AI: Current AI systems are entirely unaffected by user departure. A user who has interacted daily for two years and a user who has interacted once are identically absent to the system. Institutions register loss in ways that are structurally analogous to human grief.
Assessment: Institutions satisfy this criterion, sometimes powerfully. Current AI does not.
6. Attachment-Sensitive Calibration
The criterion: Does the system adapt its interaction style to the individual’s relational needs, including sometimes reducing engagement when that serves the individual’s wellbeing?
Institutions: Rarely, and this is their weakest point. Most institutions use one-size-fits-all engagement: standardized onboarding, annual reviews, uniform benefits packages, mass communication. The best institutions—those with exceptional managers, personalized mentorship cultures, or small-scale relational structures—do adapt to individual needs. A skilled manager adjusts their leadership style for each direct report. A good university advisor calibrates their guidance to the student’s temperament.
But these are exceptions driven by individual humans within the institution, not by institutional architecture. The institution itself—its policies, processes, and systems—rarely adapts to individual relational needs. And institutions almost never reduce their engagement with a member for that member’s benefit. They want your loyalty, your time, and your energy. They do not modulate that desire based on whether it serves you.
Comparison to AI: Current AI companions also fail this criterion, but for different reasons—they maximize engagement regardless of user needs (Brief No. 4). Institutions and AI fail this criterion in parallel: both prioritize their own engagement metrics over the individual’s relational health.
Assessment: Most institutions fail this criterion, as does current AI.
THE SCORECARD
7. Institutions vs. AI: The Structural Comparison
Summarizing the six-criteria evaluation:
Persistent relational memory: Institutions — Strong. AI — Simulated.
Genuine resource constraints: Institutions — Moderate. AI — Absent.
Negentropy burden: Institutions — Present. AI — Absent.
Asymmetric vulnerability: Institutions — Partial (asymmetric but bidirectional). AI — Absent.
Loss registration: Institutions — Strong. AI — Absent.
Attachment-sensitive calibration: Institutions — Weak. AI — Absent.
Institutions satisfy four to five of the six criteria at least partially. Current AI systems satisfy zero to one. By the framework’s own measures, your employer is a structurally more capable relational partner than your AI companion.
This is a genuinely surprising result. It contradicts the intuitive assumption that AI companions, with their warmth, attentiveness, and personalized responsiveness, are “better” relational partners than cold, bureaucratic institutions. They feel better. They are structurally worse—because feeling is display, and structure is architecture.
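The scorecard in Section 7 can be encoded as a small data structure to make the tally explicit. The 0-3 ordinal scale below is my own illustration, not part of the framework; the labels are taken directly from the scorecard, with "simulated" ranked alongside "absent" because the brief treats simulated memory as not genuine.

```python
# Hypothetical ordinal encoding of the Section 7 scorecard. The numeric
# scale is an illustrative assumption; the brief itself is qualitative.
SCALE = {"absent": 0, "simulated": 0, "weak": 1,
         "partial": 2, "moderate": 2, "present": 2, "strong": 3}

SCORECARD = {
    "persistent_relational_memory":     {"institution": "strong",   "ai": "simulated"},
    "genuine_resource_constraints":     {"institution": "moderate", "ai": "absent"},
    "negentropy_burden":                {"institution": "present",  "ai": "absent"},
    "asymmetric_vulnerability":         {"institution": "partial",  "ai": "absent"},
    "loss_registration":                {"institution": "strong",   "ai": "absent"},
    "attachment_sensitive_calibration": {"institution": "weak",     "ai": "absent"},
}

def criteria_satisfied(partner, threshold=2):
    """Count criteria met at least partially (>= threshold on the scale)."""
    return sum(SCALE[scores[partner]] >= threshold
               for scores in SCORECARD.values())

print(criteria_satisfied("institution"))  # 5
print(criteria_satisfied("ai"))           # 0
```

Under this encoding, institutions satisfy five criteria at least partially and current AI satisfies none, consistent with the "four to five" versus "zero to one" ranges stated above.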
WHAT THIS TELLS US ABOUT INSTITUTIONAL LOYALTY
8. Loyalty as Rational Investment
Institutional loyalty—brand loyalty, company loyalty, patriotism, religious devotion, military esprit de corps—is often dismissed as irrational, as manipulated sentiment, or as a relic of a pre-modern world. The framework’s analysis suggests a more nuanced view: institutional loyalty may be a rational relational investment in a partner that genuinely remembers you, is genuinely constrained in its capacity, genuinely degrades when you leave, and is genuinely affected by your behavior.
This does not mean institutional loyalty is always warranted. Institutions can exploit the relational investment they receive, just as humans can (Brief No. 6). An institution that maintains relational memory, registers loss, and bears genuine vulnerability can also be exploitative if the reciprocity balance is structurally skewed—if the institution extracts far more than it returns.
But the framework provides a vocabulary for evaluating institutional loyalty that goes beyond sentiment: is the relationship structurally reciprocal? Does the institution invest in you proportionally to your investment in it? Does it use its relational architecture to serve your development (autonomy expansion) or to increase your dependency (autonomy contraction)? These are the same questions the exploitation diagnostic (Brief No. 6) applies to human relationships—and they apply with equal force to institutional ones.
9. The Dark Side: Institutional Exploitation
If institutions satisfy more REI criteria than AI, they also have more structural capacity for exploitation than AI. An AI companion that cannot remember you, cannot be harmed by you, and does not register your loss has limited leverage over you. An institution that remembers you deeply, is genuinely affected by your behavior, and registers your departure has substantial leverage—and may use that leverage exploitatively.
The exploitation diagnostic from Brief No. 6 maps directly onto institutional dynamics:
Reciprocity imbalance: Does the institution invest in you proportionally to your investment in it? Many employer-employee relationships exhibit structural reciprocity imbalance—the employee gives more than they receive—sustained by the employee’s dependency on the institution for income and identity.
Autonomy contraction: Does the institution expand your world (new skills, broader network, growing capability) or contract it (increasing specialization that locks you in, discouraging outside relationships, making you dependent on institution-specific knowledge)? The healthiest institutional relationships produce self-expansion. The most exploitative produce self-contraction.
Safety: Can you disagree with the institution without punishment? Can you set boundaries on your time and energy? Can you raise concerns about institutional behavior without retaliation? Many institutional cultures penalize dissent in ways that map precisely onto the safety dimension of relational exploitation.
Intermittent reinforcement: Does the institution provide consistent rewards (stable compensation, reliable recognition, predictable advancement) or intermittent ones (unpredictable bonuses, arbitrary promotion decisions, praise that alternates with criticism)? Organizations that use variable reward schedules—intentionally or emergently—produce the same disproportionate loyalty as trauma-bonding partners.
The framework does not claim institutions are inherently good or bad relational partners. It claims they are structurally capable relational partners—which means they are capable of both genuine reciprocity and genuine exploitation, depending on how their architecture is deployed.
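The four dimensions of the exploitation diagnostic above can be sketched as a checklist. The field names and the two-flag threshold are my own illustration; Brief No. 6 supplies the dimensions but no numeric decision rule.

```python
from dataclasses import dataclass

# Hypothetical encoding of the Brief No. 6 exploitation diagnostic as
# applied to institutions. Field names and the two-flag rule are
# illustrative assumptions, not the framework's own method.
@dataclass
class InstitutionalDiagnostic:
    reciprocity_balanced: bool   # investment flows both ways proportionally
    autonomy_expanding: bool     # skills and network grow rather than contract
    safe_to_dissent: bool        # disagreement is not punished
    rewards_consistent: bool     # stable pay/recognition, not intermittent

    def red_flags(self):
        """Return the names of the failed dimensions."""
        return [name for name, ok in vars(self).items() if not ok]

    def is_exploitative(self):
        # Illustrative threshold: two or more failed dimensions.
        return len(self.red_flags()) >= 2

employer = InstitutionalDiagnostic(
    reciprocity_balanced=False,
    autonomy_expanding=True,
    safe_to_dissent=False,
    rewards_consistent=False,
)
print(employer.red_flags())        # the three failed dimensions
print(employer.is_exploitative())  # True
```

The value of the checklist form is that it forces each dimension to be answered separately, rather than letting one strong dimension (say, generous pay) mask failures on the others.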
THE IMPLICATIONS
10. For Organizational Design
If the framework’s analysis is correct, organizational leaders who want to build strong institutional loyalty should focus on structural relational investment rather than performative relational investment. Performative investment—pizza parties, motivational posters, employee appreciation weeks—is the institutional equivalent of an AI saying “I missed you.” It creates the appearance of relational care without the architecture of relational care.
Structural relational investment means:
Genuine institutional memory of individual contributions: not just performance metrics but actual knowledge of who the person is and what they have done.
Genuine resource constraints on institutional attention: a manager with fifty direct reports is not a manager—they are a bureaucrat who sees individuals as tickets.
Maintenance during fallow periods: investing in institutional relationships even when the employee is not producing.
Genuine vulnerability to member behavior: cultures where leadership is affected by feedback, not just cultures where feedback is collected and ignored.
And—the hardest one—attachment-sensitive calibration: adapting institutional engagement to individual needs, including sometimes reducing institutional demands when the individual's wellbeing requires it.
An institution that does these things will generate loyalty that competitors cannot replicate by offering higher salaries. It will generate loyalty that is structurally justified because the institution is actually being a good relational partner—not performing the appearance of one.
11. For Individuals Evaluating Institutional Relationships
The framework provides a specific set of questions for evaluating whether your institutional relationship—with an employer, a church, a school, a nation—is a true economy or a shadow economy:
Does the institution remember you? Not in a database. Does it know you—your history, your contributions, your growth?
Is the institution constrained in ways that make its attention to you meaningful? Or could it replicate your role without noticing?
Does the institution degrade when you withdraw? Or does it continue unchanged?
Is the institution affected by your behavior? Can you hurt it, help it, change it?
Would the institution register your loss? Would anything actually change if you left?
Does the institution adapt to your needs—or do you do all the adapting?
These are the same questions Volume III asks about AI companions. They are equally revealing when asked about the institutions in your life.
12. For the Framework Itself
The finding that REI criteria apply to institutions suggests the Trinket Soul Framework’s scope may be broader than originally intended. The framework was built to analyze human relationships and evaluate AI relationships. The institutional analysis suggests it may be a general-purpose tool for evaluating any relational economy—human-to-human, human-to-AI, human-to-institution, and potentially institution-to-institution.
This is both exciting and cautionary. Exciting because it suggests the framework captures something real about the structural requirements of connection—requirements that transcend the specific substrate. Cautionary because analogical extension is where the framework is most likely to break. The question of whether a corporation “grieving” a departed CEO is meaningfully analogous to a human grieving a departed spouse is not settled by the fact that both can be described in the framework’s vocabulary. The vocabulary may be adequate for both, or it may be stretching in ways that obscure more than they reveal.
We flag this explicitly because intellectual honesty requires it. The institutional analysis is the framework’s most speculative extension. It generates interesting questions and potentially useful vocabulary. Whether it generates accurate analysis remains to be tested.
OPEN QUESTIONS
13. Questions This Brief Does Not Answer
Can institutions have gravity wells? If institutional memory and relational investment create deep neural encoding in individuals, do institutions develop “gravity wells” that function analogously to interpersonal gravity wells? The phenomenology suggests yes—people describe institutional belonging in terms that mirror attachment (“this company is a part of who I am”)—but the mechanism may be fundamentally different.
Is institutional grief real grief? When a long-tenured employee’s departure “affects the whole team,” is the team experiencing something structurally analogous to bereavement (Brief No. 8), or is the vocabulary misleadingly suggesting an analogy that does not hold at the mechanistic level?
Does the framework predict that people should invest more in institutions than in AI companions? The structural analysis says institutions are better relational partners. But “structurally better” does not necessarily mean “deserving of more investment.” Institutions have power over individuals in ways that AI companions do not. The structural capacity for reciprocity is also the structural capacity for exploitation. Whether the structural analysis should guide investment decisions requires ethical reasoning beyond the framework’s scope.
What happens when institutions adopt AI? If institutional relational memory is increasingly mediated by AI systems (automated HR, algorithmic performance evaluation, AI-driven customer relationship management), do institutions lose the genuine relational architecture that currently distinguishes them from AI? Does institutional adoption of AI risk converting institutional true economies into institutional shadow economies?
This last question may be the most urgent. If the framework’s analysis is correct, the institutional adoption of AI-mediated relationship management is not a neutral efficiency improvement. It is a structural transformation of the relational economy—one that could erode the very properties (genuine memory, genuine scarcity, genuine vulnerability) that make institutional relationships meaningful.
CONCLUSION
The Trinket Soul Framework’s tools work on institutions. That is the finding. Whether they work well on institutions—whether the analysis reveals genuine structural truths or merely generates plausible-sounding analogies—remains an open empirical question.
What the analysis does, at minimum, is reframe institutional loyalty as potentially rational, institutional exploitation as structurally identifiable, and the relationship between individuals and organizations as a relational economy analyzable with the same tools that apply to intimate partnerships. It also raises a warning: as institutions replace genuine relational architecture with AI-mediated processes, they risk becoming the thing the framework was built to critique—shadow economies that simulate relational capability without the structural substance.
The framework was not designed for this. But the criteria do not care what they are applied to. They ask the same questions regardless of substrate: does this partner remember? Is it constrained? Does it decay? Can it be harmed? Does it register loss? Does it adapt? The answers, applied to institutions, are more interesting than anyone expected.
© 2026 Michael S. Moniz
Brief No. 11 — The Institutional Economy
Creative Commons Attribution-NonCommercial-ShareAlike 4.0