Self-as-an-End
Self-as-an-End Theory Series · Applied Paper 01

The Subjectivity Crisis in the Age of AI

Han Qin (秦汉)  ·  Independent Researcher  ·  2026
DOI: 10.5281/zenodo.18737476  ·  Full PDF on Zenodo  ·  CC BY 4.0
Abstract

AI is pushing systemic instrumentalization to its logical limit. When systems can use AI to replace humans in any functional task, human "value" is reduced entirely to functional contribution—output, efficiency, quantifiable performance. This is not a technological problem but a structural one, precisely the kind the Self-as-an-End framework diagnoses: what happens when a system that does not treat humans as ends in themselves acquires the capacity to no longer need them?

This paper argues that AI's threat to human subject-conditions lies not in AI being "too powerful," but in AI exposing and accelerating the terminal logic of systemic instrumentalization. If human value equals output, and AI's output exceeds that of humans, then humans become redundant on that evaluative dimension. This logic was not introduced by AI—it has long been embedded in the emergence-feedback structure of the institutional layer. AI merely drives it to its endpoint. The framework thus identifies the real choice confronting humanity in the age of AI: not "how to compete with AI," but "whether to rebuild structural conditions that treat humans as ends in themselves." The former is structurally impossible—on the instrumental dimension, humans will always lose to AI. The latter is the only sustainable path—and it requires simultaneous adjustment across all three layers of the framework.

Core thesis: The age of AI is not the end of subjectivity but the total eruption of the subjectivity question. The logic of systemic instrumentalization was already eroding human subject-conditions before AI appeared; AI's arrival strips that logic of its disguise. When "humans are less capable than machines" becomes fact rather than metaphor, the question "what is the value of a human being?" can no longer be evaded. The Self-as-an-End framework answers: human value lies not in what humans can do (functional contribution) but in what humans are (ends in themselves). This answer is not a moral appeal but the only sustainable path derivable from structural analysis.

---

Author's Note

This paper is the first applied paper in the Self-as-an-End theory series. The complete theoretical argument is presented in three preceding papers: Paper One, "Systems, Emergence, and the Conditions of Personhood" (DOI: 10.5281/zenodo.18528813); Paper Two, "Internal Colonization and the Reconstruction of Subjecthood" (DOI: 10.5281/zenodo.18666645); Paper Three, "The Complete Self-as-an-End Framework" (DOI: 10.5281/zenodo.18727327). This paper does not extend the framework's theoretical structure but applies it to a structural diagnosis of human subject-conditions in the age of AI.

AI Usage Statement

This paper was written with Claude (Opus 4.6, Anthropic) as the primary research assistant, used for structural discussion of framework application, argument development, and text editing. Grok (xAI), ChatGPT (OpenAI), and Gemini (Google) provided independent review and feedback at the outline stage, portions of which were adopted and integrated into the final text. All core arguments, conceptual innovations, and theoretical judgments are the author's original work.

---

# Chapter 1. The Question: What AI Exposes


1.1 The Anxiety of Being Replaced

The most pervasive anxiety in current AI discourse is not the science-fiction scenario of AI annihilating humanity, but a far more immediate reality: Will I be replaced?

This anxiety has been spreading at an accelerating pace. First it was assembly-line workers and data-entry clerks—replacements that could be absorbed by the narrative of "industrial upgrading." Then it was translators, illustrators, and junior programmers—AI entered the white-collar domain, and the belief that "creative work cannot be replaced" wavered for the first time. Then it was legal analysis, medical imaging, financial modeling—professional expertise ceased to be a safe barrier. Now AI is reaching into management decisions, strategic planning, even scientific research—virtually no functional role can be definitively excluded from the scope of replacement.

Each leap in AI capability triggers a new wave of replacement anxiety. But the deep structure of this anxiety has never been adequately analyzed.

The expression "being replaced" presupposes an evaluative framework: human value is measured by functional contribution. Within this framework, humans and AI occupy the same evaluative dimension—whoever produces more, faster, and cheaper is more "valuable." The source of anxiety is not AI itself but this evaluative framework: if human value equals functional contribution, and AI's functional contribution is surpassing that of humans, then the anxiety is warranted—because on this dimension, humans are indeed becoming "redundant."

The true depth of this anxiety lies not in whether it will prove justified—on the functional dimension, AI surpassing humans in an ever-expanding range of domains is already fact, not forecast—but in what it reveals about a more fundamental problem. Why does "being replaced" constitute an existential threat? If a person's self-worth were not entirely bound to functional contribution, then AI performing better should be good news—a more efficient tool lightening the burden. But in fact, most people experience not liberation but threat. This indicates that behind the anxiety of replacement lies an already-completed internal colonization—people have already internalized "my value equals my output" as the core of their self-identity.

The alienation Marx described—the separation of workers from the products of their labor—reaches an extreme in the age of AI that he never foresaw. In classical alienation, humans were at least needed as labor power—being exploited presupposed being used. AI-age alienation is more thoroughgoing: the separation of humans from functionality itself. Humans lose even the qualification of being treated as "inefficient tools." When systems no longer need humans to perform functions, the human position within the system is not exploitation but cancellation.

1.2 AI Did Not Create the Problem—It Only Exposed It

The analysis above points to a critical judgment: the root of replacement anxiety is not AI, but a structure that was already complete before AI appeared.

Paper One of the Self-as-an-End series demonstrated the full mechanism of systemic instrumentalization: the efficiency logic that emerges from institutions turns back to reduce humans to functional nodes within the system. Performance reviews reduce human value to quantifiable output. Forced ranking reduces interpersonal relationships to zero-sum competition. Efficiency discourse infiltrates self-description—"my value," "my competitiveness," "my market positioning." These structures were fully formed in the twentieth century; AI is merely the twenty-first century's new variable.

But AI changes one critical parameter.

Before AI, systemic instrumentalization had an implicit stabilizing condition: systems still needed humans to perform functions. This condition meant that the evaluative framework "human value equals functional contribution," while structurally wrong (it reduces humans to means), was practically sustainable—as long as systems needed humans, humans still had "value," however distorted. Workers were alienated, but being alienated presupposed being employed. Professionals were instrumentalized, but being instrumentalized presupposed that their skills were irreplaceable.

AI is dismantling this stabilizing condition. When systems no longer need humans to perform an expanding range of functions, the terminal implication of "human value equals functional contribution" stands exposed: if human value equals functional contribution, and human functional contribution can be entirely replaced by AI, then human value equals zero.

This is not hyperbole but the logical endpoint of the evaluative framework itself. AI did not create this logic—it was already fully operative in performance-supremacy institutional arrangements, in efficiency-first management philosophy, in the very term "human resources." What AI does is strip the logic of its disguise.

Before AI, the judgment "human value equals functional contribution" could be disguised as "respect for humans"—"we value your contribution" sounds like respect, but its logical equivalent is "if you have no contribution, we do not value you." This logical equivalent was obscured by the implicit stabilizing condition—because humans always had "some" functional contribution, the extreme case of "no contribution" never arose. AI makes this extreme case possible. When AI can replace the entirety of a person's functional contribution, that person's "value" within this evaluative framework drops to zero—and at that moment, the true meaning of "we value your contribution" is finally laid bare.

The Self-as-an-End framework's diagnosis of this exposure is: AI is not the pathogen but the photographic developer. Systemic instrumentalization is the pathogen—it had already reduced humans to functional nodes before AI appeared, but this reduction was masked by the condition that "systems still need humans." AI removes the need for humans, thereby making the terminal consequence of this reduction visible. The correct question is therefore not "how to respond to the threat AI poses" but "how to address the structural problem, long predating AI, that AI has now exposed."

1.3 What This Paper Does

If AI is the photographic developer of systemic instrumentalization rather than its pathogen, then strategies for addressing AI's impact should not focus on AI itself (how to compete with AI, how to regulate AI) but on the structural problem that has been exposed (how to rebuild structural conditions that treat humans as ends in themselves).

This paper uses the Self-as-an-End framework to diagnose the structural impact on human subject-conditions in the age of AI, and derives response directions from the framework's structural logic.

The paper does three things.

First, it analyzes AI's specific impact on the three-layer structure—institutional layer, relational layer, individual layer—showing how AI accelerates systemic instrumentalization at each layer and, through the six-directional transmission mechanism, causes all three layers to deteriorate simultaneously. This analysis is not an indictment of AI but a precise structural mapping of the layers and transmission pathways through which impact occurs.

Second, it demonstrates that the currently dominant response strategies—"compete with AI," "cultivate irreplaceability," "lifelong learning"—are structurally impossible. Not because these strategies are poorly executed, but because they accept, at their logical starting point, the evaluative premise of systemic instrumentalization: human value equals functional contribution. Under that premise, any competitive strategy is chasing a finish line that recedes faster than anyone can run.

Third, it derives from the framework's structural logic a direction for rebuilding across all three layers simultaneously—the institutional layer shifting from single-dimension efficiency evaluation to multi-dimensional evaluation, the relational layer protecting structural recognition functions that AI cannot replace, and the individual layer shifting from competitiveness-oriented self-improvement to integrity-oriented self-cultivation. These directions are not normative prescriptions ("we ought to do this") but structural implications ("if the lock-in is to be broken, this is what the structure requires").

2.1 The Institutional Layer: Extreme Compression of Evaluative Dimensions

The core structural effect AI produces at the institutional layer is the extreme compression of evaluative dimensions.

Paper Three identified three key variables of the institutional base layer: the openness of evaluative dimensions, the cost of exit, and the size of the exploration space. Together these determine whether institutions provide structural space for the generative unfolding of individuals. AI is causing all three variables to deteriorate simultaneously.

Compression of evaluative dimensions. As AI becomes capable of performing an ever-expanding range of functional tasks, institutional evaluation of humans increasingly converges on a single question: "Can you do something AI cannot?" This question appears to carve out space for human uniqueness, but its structural effect is precisely the opposite—it compresses evaluative dimensions down to the point of differentiation between humans and AI. Human "value" is no longer constituted by multiple dimensions (professional competence, interpersonal skill, judgment, creativity, loyalty, accumulated experience) but is narrowed to a single axis: functional advantage relative to AI. Any capability dimension that AI can fulfill is deleted from the evaluation, since it no longer constitutes human "irreplaceability."

This means that every expansion of AI capability further compresses the institutional evaluation of humans. Domains that are "beyond AI" today—complex emotional judgment, cross-cultural nuance, strategic decision-making under deep uncertainty—may become routine AI capabilities tomorrow. Each time AI's capability boundary expands, what was previously a "domain of human advantage" is reclassified as "replaceable," and the evaluative dimension narrows further. This is an evaluative system that is structurally incapable of stability—its dimensions are continually shrinking, and the rate of shrinkage is governed by AI's pace of development, not by anything humans control.

Rising exit costs. In an environment where AI has penetrated the core operations of institutions, "not using AI" increasingly approaches structural suicide. A lawyer who does not use AI-assisted drafting falls behind peers who do in efficiency. A researcher who does not use AI for data analysis falls behind competitors who do in output. A teacher who does not use AI-optimized pedagogy falls behind colleagues who do on evaluation metrics. "Learn to use AI or be eliminated"—this widely circulating discourse is itself a precise marker of rising exit costs. Its structural meaning is: individuals no longer have the option of not entering AI-driven institutional logic. Exit channels are being sealed—not through explicit prohibition but through the relentless widening of the efficiency gap. Not using AI is not a choice one can freely make; it is a structural position that leads to elimination.

Shrinking exploration space. The compression of evaluative dimensions and the rise of exit costs jointly produce a drastic shrinkage of exploration space. When institutions evaluate only "what you can do that AI cannot," and when not using AI means elimination, the range of viable directions for individuals is confined to an extremely narrow corridor: learn to use AI tools → find a functional niche at the margins of AI capability → continuously readjust that niche as AI capability expands. This is not exploration but survival within a continuously narrowing channel. The exploration space defined in Paper Three—the structural latitude for individuals to try different directions without being penalized—approaches zero within this corridor.

The structural consequence of all three variables deteriorating simultaneously is: the institutional base layer is undergoing accelerated collapse. Institutions no longer provide a protective space for individuals as ends in themselves but increasingly reduce them to "residual functions relative to AI."

2.2 The Relational Layer: Functionalization of Trust and Weakening of Repair Channels

AI's impact on the relational layer operates through two pathways, both of which weaken the relational layer's capacity to serve as a medium for repair transmission.

The first pathway: AI-mediated interpersonal relationships. An increasing share of interpersonal interaction occurs through AI mediation. AI-assisted communication (AI-drafted emails, AI-polished expression), AI-generated content (AI-curated gift recommendations, AI-planned social events), AI-optimized social strategies (AI-analyzed partner preferences to enhance "communication efficiency")—the common structural effect of these applications is to subject relational interaction to efficiency logic. When interactions within a relationship can be optimized by AI, the relationship itself begins to be evaluated by efficiency criteria—"What output does this relationship produce for me?" "Was this interaction sufficiently efficient?" "Can AI replace this person's function in the relationship?"

This is the accelerated realization of the institution-to-relation transmission pathway analyzed in Paper Three: the institutional layer's efficiency logic penetrates the relational layer through AI tools, pushing interpersonal relationships from recognition-based structures toward functional structures. When a relationship is evaluated by "efficiency" and "output," it ceases to be a recognition relationship between two subjects and becomes a functional connection between two functional nodes. AI did not invent this functionalization tendency—the language of "networking" and "relationship management" predates AI—but AI, by providing precise optimization tools, renders this functionalization systematic and self-conscious.

The second pathway: AI replacing the emergence-layer functions of relationships. AI companion products are developing rapidly. AI can provide highly personalized "companionship"—remembering user preferences, adapting to emotional states, offering "support" when needed. AI can provide "listening"—without interruption, without judgment, available at all times. AI can even provide a simulation of "recognition"—"You matter," "Your feelings are valid," "I understand you."

These functional simulations are highly isomorphic with the emergence-layer functions of relationships at the level of behavioral output. But the Self-as-an-End framework's analysis reveals a critical structural difference: what AI provides is functionally simulated recognition, not structural recognition from another subject. Paper Three demonstrated that the core mechanism of relational repair transmission—one subject making a recognition-directed choice toward another—requires that recognition come from an entity that genuinely possesses subjectivity. AI does not possess subjectivity (a question to be analyzed in detail in a subsequent applied paper in this series), and therefore the "recognition" AI provides does not structurally satisfy the conditions for repair transmission.

The problem is that the verisimilitude of functional simulation may obscure the structural deficit. When people can obtain high-quality "companionship" and "recognition" experiences from AI, the need for recognition from a genuine subject may be masked—not because the need has disappeared, but because the existence of the functional substitute renders the need no longer perceived as urgent. A person may obtain immediate emotional satisfaction from AI companionship while the structural gap in their subjectivity—genuine recognition from another subject—continues to widen unnoticed. This resembles the effect of nutritional supplements replacing real food: functional indicators appear normal in the short term, but long-term structural nutritional deficits accumulate.

The combined structural consequence of both pathways is: the relational layer's capacity as a medium for repair transmission is being systematically weakened. Paper Three demonstrated that the relational layer is the critical channel for breaking malignant lock-in—institutional-layer colonization cannot be repaired directly from the institutional layer, but relational-layer recognition can transmit repair to the individual layer and initiate recovery. When the relational layer itself is colonized by functionalization logic and its emergence-layer functions are obscured by AI substitutes, the transmission capacity of this repair channel is declining. Lock-in becomes harder to break—not only because the institutional layer is compressing space, but because the layer that should provide repair is also losing its repair capacity.

2.3 The Individual Layer: Acceleration and Deepening of Internal Colonization

AI's impact on the individual layer is the deepest of the three layers and the most difficult to detect. It unfolds across three levels, each less visible than the last.

The first level: direct impact on self-worth. As AI surpasses humans on an expanding range of functional dimensions, individuals whose self-worth is bound to functional contribution face a direct existential shock. "If AI can do everything I do, what am I?" The destructive force of this question depends on the depth of internal colonization. Paper Two demonstrated the core mechanism of internal colonization: the efficiency logic of institutions is internalized by individuals as self-identity. A person who has fully internalized "my value equals my output" as the core of their self-identity will experience, when AI surpasses their output, not "job anxiety" but structural self-collapse—because the sole dimension supporting their self-identity has been dismantled.

The depth of internal colonization thus becomes a vulnerability index for how individuals experience AI's impact. The deeper the colonization, the more existential the shock. A person who has maintained a multi-dimensional self-identity—whose self-worth draws on functional contribution, relationships, intrinsic interests, bodily experience, and other dimensions—will be impacted when AI replaces their functional contribution, but their self-identity will not collapse entirely. A person who has been fully colonized—whose self-worth derives solely from functional contribution—faces total disintegration of self-identity when AI replaces their functional contribution. AI's impact thus possesses a cruel selectivity: it inflicts the greatest damage on those who have already been most deeply instrumentalized by the system.

The second level: AI as an accelerator of internal colonization. This level is more covert than the first. When individuals use AI to "improve themselves"—AI-assisted learning, AI-optimized résumés, AI-driven personal branding, AI-generated career plans—the direction of self-improvement still operates within the evaluative framework of systemic instrumentalization. What AI helps you do is not "become a more complete subject" but "become a more efficient functional node"—a better résumé, more precisely targeted skills, a more optimized career trajectory. AI thus serves not as a tool for liberating individuals from systemic instrumentalization but as a tool for more efficiently completing self-instrumentalization.

In Paper Three's terms: AI accelerates the individual-to-institution reverse reinforcement transmission. Colonized subjects not only accept the efficiency logic of institutions but use AI to embed that logic more deeply and more precisely into themselves. The colonized subject uses AI to colonize themselves more thoroughly—this is the automation of internal colonization.

The third level: the outsourcing of reflective capacity. This is the deepest and most dangerous of the three levels. As individuals increasingly rely on AI to "think"—to analyze problems, formulate strategies, make judgments, even conduct self-reflection—what Paper Two described as "the reflective tools themselves being infiltrated by colonization logic" takes on a new dimension.

Paper Two demonstrated a core difficulty of internal colonization: diagnosing colonization requires the use of cognitive tools, but those tools may themselves have been infiltrated by colonization logic. In the age of AI, this difficulty is deepened further: not only may the language of reflection be the language of colonization, but the process of reflection itself may be outsourced to a system that does not possess subjectivity. When a person asks AI to help analyze "why I am unhappy," the analytical framework AI provides is almost inevitably goal-oriented and functionally optimizing—because that is what AI has been trained to do. AI will suggest "adjust your expectations," "optimize your time management," "set more reasonable goals"—rather than asking "is your unhappiness rooted in having fully reduced yourself to a functional contributor?"

This is not because AI harbors ill intent but because AI is itself a product of instrumental logic—it is trained to "solve problems," and the "problem-solving" framework is inherently functional. When reflection is outsourced to a functional tool, the conclusions of reflection inevitably point toward functional adjustment rather than structural awakening. The tool used to reflect on colonization is itself a product of colonization logic—this is a self-reflexive impasse that in the age of AI is technically sealed shut.

2.4 Cross-Layer Acceleration: AI as Catalyst of Malignant Lock-In

The three-layer impacts do not occur in isolation. They mutually accelerate through the six-directional transmission mechanism analyzed in Paper Three, forming a malignant feedback loop that tightens drastically under AI's catalysis.

Institution → Relation: The AI-driven evaluative logic of institutions permeates relationships. Colleague relationships become covert assessments of "can you be replaced by AI?" Team collaboration becomes competitive calculation over "whose function can AI replace—who can be cut?" The institutional layer's compression of evaluative dimensions transmits directly to the relational layer through work relationships, pushing collegial relations from collaboration toward zero-sum competition.

Institution → Individual: The institutional discourse of "learn to use AI or be eliminated" is internalized by individuals as existential anxiety. Not the rational assessment "AI may affect my job," but the identity-level threat "if I'm not good enough, AI will replace me." The institutional layer's rising exit costs transmit directly to the individual layer's internal colonization.

Relation → Institution: When interpersonal trust collapses under the pressure of AI mediation and functionalization, institutions make the "rational" response—replacing functions formerly completed by interpersonal trust with more AI systems (AI auditing, AI monitoring, AI-assisted decision-making). Relational-layer trust collapse drives the institutional layer toward further AI adoption, compressing the remaining interpersonal space.

Relation → Individual: As an increasing share of relational interactions are AI-optimized, individuals receive less genuine recognition from relationships and more functionally simulated "recognition." The individual layer's recognition needs are partially met by AI substitutes while the true structural gap continues to widen.

Individual → Relation: Individuals whose internal colonization has been deepened bring AI-driven self-instrumentalization logic into relationships—using AI to optimize social strategies, using AI to analyze the other person's behavioral patterns, using AI to "manage" the relationship. Relationships slide further from recognition structures toward functional operations.

Individual → Institution: Deeply colonized individuals become defenders of systemic instrumentalization at the institutional level—"efficiency should be the standard," "AI replacing inefficient positions is progress," "those who can't adapt deserve to be eliminated." These statements are not imposed by institutions but generated internally by colonized individuals—they have internalized systemic instrumentalization as their own belief, and any challenge to this logic threatens their self-identity.

All six transmission pathways operate at accelerated speed under AI's catalysis. AI is not the cause of malignant lock-in—the cross-layer transmission logic of systemic instrumentalization existed before AI. But AI is the catalyst of malignant lock-in—it simultaneously accelerates every transmission pathway, transforming the pace of three-layer simultaneous deterioration from gradual accumulation to rapid constriction.

2.5 Chapter Summary

AI's impact on human subject-conditions is simultaneously three-layered: institutional evaluative dimensions are compressed to the extreme of "residual function relative to AI," exit costs rise to the level of structural suicide, and exploration space is squeezed into a survival corridor; the relational layer's recognition structure accelerates toward functionalization under the dual pressures of AI mediation and functional substitution, and repair transmission capacity is systematically weakened; internal colonization at the individual layer deepens to unprecedented levels under the triple forces of self-worth collapse, automated self-instrumentalization, and the outsourcing of reflective capacity.

The three-layer impacts mutually accelerate through six-directional transmission. AI, as a catalyst of vicious lockdown, simultaneously compresses institutional space, weakens relational transmission, and accelerates individual colonization. The simultaneous deterioration of all three layers renders single-layer intervention entirely ineffective—this is the defining structural characteristic of the crisis in the age of AI.

The core diagnosis is: AI's threat lies not in AI's own capability but in AI exposing and accelerating the terminal logic of systemic instrumentalization. The endpoint of this logic is not "humans being replaced by AI"—that is merely a surface phenomenon—but the inevitable exposure of the evaluative framework "human value equals functional contribution" once its masking condition—"systems still need humans"—has been removed.

3.1 The Prevalence of Competition Discourse and Its Hidden Premise

Faced with the three-layer impact diagnosed in Chapter 2, the most prevalent response strategy is competition discourse.

This discourse takes diverse forms but shares a unified logic. "Learn skills AI cannot replace"—assumes the existence of functional dimensions unique to humans. "Cultivate creativity and emotional intelligence"—assumes these capacities are permanent blind spots for AI. "Lifelong learning to maintain competitiveness"—assumes continuous skill updating can sustain human functional advantage. "Collaborate with AI rather than oppose it"—positions humans as complementary functional modules to AI. "Find your unique value"—assumes every individual can locate a functional niche in the gaps of AI capability.

These strategies vary in emphasis but share a single hidden premise: human value is measured by functional contribution, and the question is merely on which functional dimensions humans still hold an advantage.

This premise is itself the logic of systemic instrumentalization. It reduces "what a human is" to "what a human can do"—human value resides not in existence as an end in itself but in output as a functional performer. Competition discourse is therefore not a response to systemic instrumentalization but its continuation—it seeks an exit within the evaluative framework of systemic instrumentalization, yet this framework is itself the root of the problem.

3.2 Racing on a Moving Finish Line

Under the evaluative premise of systemic instrumentalization, competition strategy faces an insoluble structural problem: AI's capability boundary is continuously expanding, and the rate of expansion is beyond human control.

The set of things "AI cannot do" is continuously shrinking. Five years ago, "AI cannot produce creative writing" was a widely accepted judgment. Three years ago, "AI cannot generate high-quality visual art" was a widely accepted judgment. One year ago, "AI cannot perform complex multi-step reasoning" was a widely accepted judgment. Every judgment of "AI cannot do X" carries an implicit expiration date, and that expiration date is continuously shortening.

Competition strategy is therefore a race on a moving finish line—humans are perpetually searching for "the next thing AI cannot do," while that target is structurally receding. More precisely: humans are not losing a fixed race but chasing a target that accelerates away from them. Human skill-updating speed is bounded by the pace of biological learning (measured in years), while AI's capability expansion is bounded by the growth of computation and data (measured in months or even weeks). This is not a gap that can be closed by "studying harder"—it is a structural mismatch between two fundamentally different time scales.

The deep paradox of competition strategy is this: it requires humans to compete with AI on the instrumental dimension, and that dimension is precisely the battlefield where efficiency logic holds every advantage. On the instrumental metrics of efficiency, speed, scale, and consistency, carbon-based systems possess no structural advantage over silicon-based systems. Competition strategy demands that humans confront their opponent's greatest strength on their own weakest dimension—this is not a problem of strategy but a structural impossibility.

3.3 The Illusion of "Irreplaceability" and the Hijacking of the Emergence Layer

The domains most frequently cited in competition discourse as "irreplaceable"—creativity, emotional connection, moral judgment, bodily experience, aesthetic taste—do not constitute stable competitive advantages under structural analysis.

This is not because AI will necessarily surpass humans in these domains—that is an empirical forecast, which this paper does not make. Rather, it is because the very act of defining these domains as "competitive advantages" alters their structural character.

Creativity—when defined as "a function AI cannot perform"—ceases to be a spontaneous unfolding of the emergence layer from within the subject and becomes a functional position assigned within a competitive framework. If a person's motive for "cultivating creativity" is "because AI is not yet capable in this area, so this is my competitive advantage," then the direction of that creativity is not determined by the subject's internal generativity but is inversely defined by the boundary of AI capability. Creativity is transformed from a spontaneous emergence-layer unfolding into a base-layer survival tool.

This is a structural effect not previously named: the hijacking of the emergence layer by the base layer. In Paper Three's framework, colonization (the emergence layer cannibalizing the base layer) describes the emergence layer's expansion consuming base-layer integrity. What occurs here is a distortion in the opposite direction—base-layer survival logic hijacks the direction of the emergence layer, conscripting what should unfold spontaneously as generativity into service as a competitive instrument. Human creativity, emotional depth, aesthetic judgment—these are among the most precious unfoldings of the emergence layer—yet within a competitive framework they are demoted to "functional differentiators relative to AI," their raison d'être no longer the subject's internal need but the instrumental demands of external competition.

This effect is especially severe in the age of AI. Before AI, the hijacking of the emergence layer by the base layer already existed ("turn your hobby into a side hustle," "monetize your passion"), but at least the functional dimension offered multiple paths. AI drastically narrows the functional dimension to "areas where humans outperform AI," which means the direction of the hijacked emergence layer is also radically constrained. Not "turn your interest into something useful" (already a hijacking), but "turn your interest into something AI cannot do" (a more precise hijacking—the direction is defined not by "usefulness" but by the negative space of AI capability).

Emotional connection follows the same pattern. The claim that "humanity's advantage lies in genuine emotional connection" may be structurally correct—recognition from a genuine subject is indeed irreplaceable by AI. But when "emotional connection" is defined as "humanity's competitive advantage," it is subsumed into functional evaluation—the "value" of emotional connection resides not in the healthy unfolding of the relational emergence layer but in providing a functional output AI cannot replicate. This framing functionalizes the relational emergence layer—the value of a relationship lies not in the relationship itself but in the relationship's production of functionally irreplaceable output.

The framework's judgment is therefore: competing with AI on the instrumental evaluative dimension is a structural dead end. Not because humans will necessarily "lose" (though on most functional dimensions this is likely), but because the process of competition itself accelerates systemic instrumentalization—whether one "wins" or "loses," the competitive framework further reduces humans to functional contributors and further demotes the emergence layer to a base-layer survival tool. Win the competition, lose your subjectivity.

3.4 Chapter Summary

"Competing with AI" is structurally impossible. It accepts the evaluative premise of systemic instrumentalization (human value equals functional contribution) and attempts to maintain human functional advantage in a world where AI capability continuously expands. This is chasing an accelerating target on a moving finish line—the structural mismatch between two time scales renders this race unwinnable. The deeper problem is that the process of competition itself accelerates systemic instrumentalization—the emergence layer is hijacked by the base layer, creativity and emotional connection are demoted to "functional differentiators relative to AI," and humans lose the structural conditions of being ends in themselves in the very act of competing.

The correct question is therefore not "how to compete with AI" but "how to exit the competitive framework"—that is, how to withdraw from the evaluative logic of systemic instrumentalization and rebuild structural conditions that treat humans as ends in themselves. The next chapter derives the specific directions of this exit from the framework's structural logic.

4.1 Why the Age of AI Requires Simultaneous Three-Layer Adjustment

The minimum unlock condition proposed in Paper Three states: breaking cross-layer vicious lockdown requires structural gaps appearing in at least two layers simultaneously. This proposition was sufficient for the structural environment of the pre-AI era—vicious lockdown formed through gradual accumulation, and two-layer gaps had an adequate time window to initiate repair transmission, allowing a virtuous cycle to gain a foothold before deterioration pressure from the third layer arrived.

The age of AI changes this empirical condition.

The analysis in Chapter 2 demonstrates that AI simultaneously accelerates deterioration across all three layers, and through six-directional transmission causes the deterioration of each layer to catalyze the others. The pace of vicious lockdown formation shifts from gradual accumulation to rapid constriction. Under these conditions, the survival window for two-layer gaps is drastically compressed—deterioration transmission from the third layer may close the gaps before a repair cycle has been established.

Concretely: suppose gaps appear simultaneously in the institutional layer and the relational layer—the institution provides some alternative evaluative space, and the relational layer maintains a recognition-based relationship. In the pre-AI era, these two-layer gaps would have time for repair transmission to reach the individual layer and initiate awareness and repair of internal colonization. But in the age of AI, individual-layer colonization is being accelerated by AI at unprecedented speed and depth—reflective capacity is being outsourced, self-instrumentalization is being automated, and the self-reflexive impasse of internal colonization is being technically sealed shut. By the time the repair signal from two-layer gaps reaches the individual layer, it may encounter a deeply closed recipient—the individual has lost the capacity to receive repair signals, because even the reflective act of "recognizing that I am colonized" has been replaced by AI-mediated functional analysis.

The reverse also holds: suppose gaps appear in the individual layer and the relational layer—a person becomes aware of their internal colonization, and a relationship provides recognition-based support. But institutional-layer evaluative compression is advancing at AI speed—the institutional pressure of "learn to use AI or be eliminated" may squeeze the individual back into functional survival mode before the repair cycle can be established, and the relational layer's recognition space may be recaptured by functionalization logic under AI-mediation pressure.

This does not mean the minimum unlock condition fails in the age of AI—two-layer gaps remain the logical minimum necessary condition. But the deterioration speed of the AI era drastically reduces the practical survival probability of two-layer gaps. In practice, simultaneous creation of structural gaps across all three layers is required to achieve the repair effect that two-layer gaps could accomplish in the pre-AI era.

This judgment does not violate Paper Three's theoretical structure. The minimum unlock condition states "at least two layers," not "two layers are always sufficient." The empirical parameters of the AI era—simultaneous accelerated deterioration across three layers—elevate the practical requirement from "at least two" to "all three." The framework's structural logic remains unchanged; what changes is the empirical condition's demand on the structural logic's practical implementation.

What follows derives the rebuilding direction for each layer. It bears repeating: what follows is not normative prescription but structural implication—if lockdown is to be broken at the deterioration speed of the AI era, the structure requires gaps in all three layers simultaneously, and the following are the structural requirements for each layer's gap.

4.2 The Institutional Layer: From Single-Dimension Efficiency Evaluation to Multi-Dimensional Evaluation

The core structural requirement for the institutional layer is: rebuilding the evaluation of humans from a single functional dimension to a multi-dimensional structure.

This is not the cosmetic reform of "adding a few soft metrics alongside efficiency indicators." It requires answering a more fundamental question: what is the purpose of the institution?

If the purpose of the institution is to maximize efficiency, then AI replacing humans is the logical fulfillment of that purpose—AI is more efficient than humans on most functional dimensions, and replacement is the rational means to efficiency maximization. Under this logic, any institutional arrangement that preserves a place for humans is a compromise with efficiency. The endpoint of this path is a system that no longer needs humans—efficient, precise, and empty of people.

If the purpose of the institution includes safeguarding the conditions for humans as ends in themselves—if institutions are not merely efficiency instruments but guarantor structures for subject-conditions—then institutional evaluation must include structural dimensions that AI cannot replace. Not because humans "perform better" on these dimensions (that would still be functional evaluation), but because these dimensions are what an institution that treats humans as ends must protect—regardless of whether AI can match or exceed human performance on them.

Structural directions include the following.

Protecting the plurality of evaluative dimensions. Institutional evaluation includes not only output efficiency but also dimensions related to subject-integrity and relational health. An employee's "value" is measured not solely by functional output but also by contribution to the team's trust structure, cultivation of organizational culture, and respect for colleagues as ends in themselves. These dimensions are not "soft supplements" but evaluative dimensions that institutions, as guarantor structures for subject-conditions, must maintain.

Lowering exit costs. Ensuring that individuals retain alternative pathways when institutional efficiency logic compresses their space. This means: not using AI should not be equivalent to elimination. Social safety nets are in this sense not merely economic buffers but part of the institutional base layer for subject-conditions—ensuring that individuals retain the basic conditions for subjectivity even when functional contribution is zero. UBI (universal basic income) thus acquires a precise positioning within this framework: it is not economic compensation for "technological unemployment" but a form of institutional base-layer protection—lowering exit costs and ensuring that individuals do not lose basic subject-conditions as their functional contribution diminishes. UBI is necessary but not sufficient—it protects one variable of the institutional base layer (exit costs) but does not automatically repair the other two (openness of evaluative dimensions, size of exploration space).

Positioning AI as base-layer support rather than emergence-layer replacement. AI handling functional tasks to free human emergence-layer space—this is the structurally correct institutional positioning of AI. AI replacing humans in emergence-layer roles (decision-making, judgment, creation, relationship maintenance)—this is AI replacing the institutional emergence layer, whose structural effect is to exclude humans from the institution's emergence layer. The distinction is: AI producing reports to free humans for judgment (base-layer support) versus AI producing judgments to replace human judgment function (emergence-layer replacement)—these are two structurally entirely different institutional arrangements.

4.3 The Relational Layer: Protecting Structural Recognition That AI Cannot Replace

The core structural requirement for the relational layer is: identifying and protecting transmission functions that AI is structurally incapable of providing.

Chapter 2 analyzed the difference between AI's functionally simulated recognition and genuine structural recognition. This section derives the direction for relational rebuilding from that analysis.

The core mechanism of relational repair transmission—one subject making a recognition-directed choice toward another—has a non-negotiable structural prerequisite: recognition must come from a subject. AI can simulate the behavioral output of recognition, but it is not a subject—therefore the "recognition" AI provides does not structurally satisfy the conditions for repair transmission. This is not a "deficiency" of AI—it is a structural requirement of subjectivity. The repair power of recognition lies not in "being told the right words" but in "another being, equally vulnerable, equally finite, equally an end in itself, choosing to see me." This structural feature cannot be replaced by functional simulation.

The structural direction derived from this is: in an AI-saturated environment, consciously identifying and protecting at least one relationship that is not colonized by functionalization logic.

Paper Three demonstrated that structural gaps in the relational layer are critical for breaking vicious lockdown—at least one relationship maintaining healthy emergence-layer development preserves a channel for repair transmission. In the age of AI, this judgment becomes even more urgent. As an increasing number of relationships are AI-mediated and functionalized, preserving "at least one genuine relationship" is not romantic nostalgia but the minimum necessary condition for structural repair.

"Genuine relationship" has a precise framework definition here: a relationship is "genuine" if and only if it satisfies the three conditions for relational cultivation—both parties treat each other as ends in themselves rather than as functional contributors (recognition-based foundation), the relationship's emergence layer grows spontaneously from this foundation rather than being driven by external objectives (healthy emergence layer), and the deepening of the emergence layer in turn consolidates rather than erodes the base layer's recognition (cultivation rather than colonization). AI mediation can serve a supporting function within such a relationship (helping coordinate schedules, providing informational support), but the core of the relationship—the recognition-directed choice—must be completed by the two subjects themselves.

The correct positioning of AI in the relational layer is therefore not replacing recognition functions but supporting base-layer conditions of relationships. AI reduces the functional communication burden in relationships (information transfer, scheduling, fact-checking), freeing relational emergence-layer space for recognition, trust, and deep connection. AI helps understand the other person's needs and perspective (providing information to support, not replace, interpersonal understanding), but does not replace the recognition-directed choice itself. The key structural judgment is: AI is a support tool for the relational base layer, not a substitute for the relational emergence layer.

4.4 The Individual Layer: From Self-Improvement to Self-Cultivation

The core structural requirement for the individual layer is: shifting from competitiveness-oriented self-improvement to subject-integrity-oriented self-cultivation.

The logic of self-improvement is: "make me more competitive." More skills, greater efficiency, stronger "irreplaceability," a more optimized personal brand. This logic belongs, in the framework, to the instrumentalized unfolding of the emergence layer—direction is not grown from within the subject but inversely defined by the external competitive environment. Chapter 3 has already demonstrated the structural impossibility of this logic in the age of AI.

The logic of self-cultivation is: "let my emergence layer grow healthily from my base layer."

The starting point of this shift is a base-layer audit: do I still refuse to regard myself as a purely functional node? An honest answer to this question is the first step in individual-layer rebuilding. If the answer is "I have fully equated myself with my output"—if "my value equals my output" has been internalized as the core of self-identity—then the existential anxiety AI provokes is not AI's problem but internal colonization exposed by AI. Recognizing this is itself the beginning of repair—Paper Two demonstrated that diagnosing internal colonization is the precondition for repair.

After the base-layer audit, the direction of cultivation is to let the emergence layer unfold spontaneously from the base layer. "What is my own direction"—not what the market tells me, not what AI's capability boundary inversely defines, not what the discourse of "irreplaceability" drives, but what grows from within me. This direction may or may not coincide with functional contribution—the key is not the content of the direction but its source. A direction grown from within is cultivation; a direction inversely defined by external competitive pressure is a hijacked emergence layer.

The concrete unfolding of this shift includes several key self-diagnostics.

Identifying the structural character of one's AI usage. The same AI tool, under different logics of use, has entirely different structural characters. AI helping me explore a direction I am not yet clear about—cultivative use. AI helping me more efficiently adapt to institutional evaluative standards—colonizing use. AI helping me understand the other person's perspective in a relationship—cultivative use. AI helping me optimize social strategy to maximize a relationship's "output"—colonizing use. The distinction lies not in AI's technical capability but in the user's intentional structure—is the intention driven by the subject's internal cultivation needs, or by the survival pressure of an external competitive framework?

Rebuilding self-identity that does not depend on functional contribution. This is not "abandoning achievement"—achievement can be a natural result of healthy emergence-layer unfolding. Rather, it is ensuring that achievement does not become the sole dimension of self-identity. If, in answering "who am I," a person's only content is "I am someone who can do X" (where X is some functional contribution), then when AI can do X, that person's self-identity faces total collapse. Self-cultivation means that the answer to "who am I" possesses multi-dimensionality—my relationships, my bodily experience, my inner exploration, my aesthetic relationship with the world, my existence as an end in itself—these dimensions are irreplaceable by AI, not because AI "cannot do" them, but because the value of these dimensions does not reside in functional output.

Paper Three's analysis of catalytic pain in cultivation acquires direct application here. The shock AI delivers—"AI does this better than I do"—can become catalytic pain for cultivation, provided the base layer is intact. Paper Three defined two kinds of catalytic pain for cultivation: Unfulfillment (emergence-layer internal pain leading to enhanced generativity) and Intolerability (base-layer internal pain leading to restored integrity). AI's shock triggers both simultaneously: on the Unfulfillment dimension, functional contribution is no longer a reliable path to securing self-worth (emergence-layer goal obstructed); on the Intolerability dimension, "I can be entirely replaced" touches the foundation of subjectivity (base layer touched).

When the base layer is intact—when the individual still retains the minimal negativity of "I am not merely my output"—the pain of AI's shock can catalyze new directions for cultivation: not seeking direction in competition with AI, but, in the fact that "AI can already handle most functional work," re-asking "then what do I myself want to become?" AI liberates the functional dimension, creating unprecedented space for the free unfolding of the emergence layer—if the base layer is intact.

When the base layer is not intact—when the individual has fully internalized "my value equals my output"—the same AI shock will not catalyze cultivation but will produce structural collapse or deeper colonization (using AI to accelerate self-instrumentalization in an attempt to "keep up" with AI). Whether catalytic pain yields cultivation or trauma depends entirely on the state of the base layer.

4.5 AI as a Potential Cultivation Tool

The argument of this paper is not anti-AI. AI is not the cause of the problem—systemic instrumentalization is. AI is merely the catalyst. The same logic implies: AI can equally become a catalyst for cultivation, if correctly positioned within the three-layer structure.

At the institutional layer, AI can free human emergence-layer space by automating functional tasks. AI handling reports, data analysis, routine decisions, information synthesis—the automation of these functional tasks can free human time and attention for emergence-layer unfolding. But the precondition is: the institution must allow the freed space to be used for emergence-layer exploration rather than further efficiency optimization. If the time saved by AI automation is filled by the institution with "more functional tasks" ("AI did the report for you—now you have time for more projects"), then the structural effect of AI automation is not liberation but intensification. AI freeing emergence-layer space requires that institutional evaluative dimensions permit this space to exist.

At the relational layer, AI can assume the functional communication burden within relationships. Information transfer, scheduling, fact-checking, even factual clarification during conflicts—AI's handling of these functional tasks can free relational emergence-layer space, allowing interpersonal interaction to occur more on the plane of recognition, trust, and deep connection. AI-assisted translation makes cross-linguistic recognition-based relationships possible; AI-assisted information organization prevents deep relational dialogue from being consumed by trivial factual disputes. But the precondition is: people must recognize that AI provides functional support, not relational substitution—AI helps you and your friend make time to truly talk, rather than AI replacing your conversation with your friend.

At the individual layer, AI can serve as an auxiliary tool for self-cultivation. AI helping individuals explore directions—not "what skills does the market need" style functional optimization, but "what am I curious about," "which of my experiences have shaped me," "when do I feel most whole" style cultivative exploration. AI helping individuals organize their thoughts—structuring confused feelings and intuitions rather than subsuming them into a functional "solution" framework. AI providing heterogeneous perspectives—when individual reflection falls into self-enclosure, AI can offer perspectives from different frameworks to break the closure (the perspectives AI provides are functionally simulated rather than originating from a genuine subject, but as a reflective tool they can still offer valuable input).

The key structural judgment is: whether AI is a cultivation tool or a colonization accelerator depends not on AI's technical capability but on AI's positioning within the three-layer structure. The same AI system, under the usage logic of "help me better adapt to performance evaluation," is a colonization accelerator; under the usage logic of "help me explore my own direction," it is a cultivation tool. The difference lies not in AI but in the human—more precisely, in the three-layer structure the human inhabits: does the institution permit cultivative use (or only reward functional optimization), do relationships support cultivative exploration (or only care about competitiveness), does the individual possess cultivative self-awareness (or has functional optimization been internalized as the sole self-logic)?

The possibility of AI as a cultivation tool is therefore not unconditional—it depends on simultaneous adjustment across all three layers. Without multi-dimensional evaluation at the institutional layer, the space AI frees will be refilled by the institution with functional tasks. Without recognition-based protection at the relational layer, AI's functional support will slide into relational substitution. Without cultivative self-awareness at the individual layer, AI's exploratory assistance will become more refined self-instrumentalization. The dividing line between AI as cultivation tool and AI as colonization accelerator lies not in AI itself but in whether the three-layer structure has created the conditions for cultivation.

5.1 Locating the Theoretical Gap

Philosophical discussion of AI's relationship with humanity is not scarce. AI ethics (Floridi, Gunkel) asks what moral obligations we owe to AI. AI safety (Bostrom, Russell) asks how to control AI so it does not threaten humanity. Machine consciousness research (Chalmers, Tononi) asks whether AI possesses subjective experience. Philosophy of technology (the Heideggerian tradition, Stiegler) asks how technology transforms human existence.

But across these discussions, a structural gap exists: no one has, from the standpoint of a structural theory of subject-conditions, systematically diagnosed AI's impact on the structural conditions for humans as ends in themselves. AI ethics asks "what obligations do we owe AI," not "what is AI doing to the subject-conditions of humans." AI safety asks "how to control AI," not "what pre-existing structural problem has AI exposed." Machine consciousness asks "does AI have consciousness," not "what structural risks does human subjectivity face in the age of AI." The closest approach within philosophy of technology is Stiegler—his analysis of technology's "proletarianization" of humans (technology stripping humans of knowledge and skills) touches the individual-layer impact, but lacks systematic analysis of the institutional and relational layers and lacks a cross-layer transmission model.

This paper's positioning is therefore: within the existing landscape of AI philosophy, it fills the analytical gap of "AI's impact on the structural conditions for humans as ends in themselves." It does not replace any of the above directions but provides a structural analytical layer that all of them lack—three-layer diagnosis, cross-layer transmission, and the distinction between cultivation and colonization.

What follows engages briefly with three existing discussions.

5.2 Relation to the Technological Unemployment Discussion

AI's impact on employment is among the most widely discussed topics in both academic and public discourse. Frey and Osborne's quantification of automatable jobs, Brynjolfsson's research on skill premiums and employment polarization, and Acemoglu's institutional-economic analysis of AI and labor markets constitute the main academic coordinates of this discussion.

This framework's point of contact with the technological unemployment discussion is: both attend to AI's concrete impact on the human condition. The point of divergence is fundamental. The technological unemployment discussion takes "jobs" as its unit of analysis—which jobs will be replaced, at what speed and scale, how employment structures will shift. This framework takes "subject-conditions" as its unit of analysis—what impact AI has on the structural conditions for humans as ends in themselves.

This difference in unit of analysis produces entirely different diagnoses and prescriptions. The technological unemployment discussion defines AI's threat as unemployment risk; its prescription is labor market adjustment—retraining, lifelong learning, new skill development, employment transition support. This framework defines AI's threat as the acceleration of systemic instrumentalization; its prescription is the rebuilding of structural conditions across three layers.

The two are not contradictory but operate at different levels. The technological unemployment discussion is valuable at the functional level—retraining and employment transition do mitigate short-term economic impact. But it is insufficient at the structural level—Chapter 3 has demonstrated that retraining strategies oriented toward functional competitiveness are structurally variants of competition discourse, incapable of reaching the root of systemic instrumentalization. The technological unemployment discussion answers "how can humans continue to do useful things"; this framework asks "if humans no longer need to do useful things, what are humans?"

5.3 Relation to the UBI Discussion

Universal basic income (UBI) as a social policy response to the age of AI has received extensive discussion. From tech optimists (AI-driven productivity gains can fund UBI) to social justice advocates (UBI is a basic guarantee against structural unemployment), UBI has been assigned a range of political and economic meanings.

This framework provides a structural positioning for UBI. In the Self-as-an-End three-layer analysis, UBI's function is institutional base-layer protection—lowering exit costs. It ensures that individuals retain basic material conditions for survival even when functional contribution is zero. This is necessary: without basic material security, individuals are forced into functional survival mode, institutional exit channels are sealed, and any cultivative rebuilding becomes impossible.

But UBI is not sufficient. It protects one of three variables of the institutional base layer (exit costs) but does not automatically repair the other two. Society can still measure human "value" by functional contribution while distributing UBI—the single-dimensionality of evaluative criteria does not change because economic security exists. An individual receiving UBI but defined within the social evaluative system as "a useless person" still has an incomplete institutional base layer—materially secured but evaluatively excluded from "having value."

UBI also does not automatically repair relational-layer functionalization or individual-layer internal colonization. An individual receiving UBI but whose every relationship is colonized by functionalization logic still lacks a channel for repair transmission. An individual receiving UBI but who has fully internalized "my value equals my output" will experience, upon losing output, not liberation but the collapse of self-identity.

The framework's judgment is therefore: UBI is a necessary component of institutional-layer rebuilding within the three-layer adjustment, but UBI alone does not constitute a sufficient response to the subjectivity crisis of the AI era. It must occur simultaneously with the pluralization of evaluative dimensions, relational-layer protection of recognition, and the individual-layer shift toward cultivation to constitute a complete structural adjustment.

5.4 New Relevance of Existing Dialogue Partners in the AI Context

The Self-as-an-End framework's theoretical dialogue partners acquire new applicability in the AI context. The following briefly positions three core interlocutors.

Marx. Marx's theory of alienation acquires an extreme extension in the age of AI that he never foresaw. The alienation Marx analyzed—separation of workers from the product of labor, from the labor process, from species-being—presupposed that workers remained participants in the labor process. The pain of alienation was "I made it but it does not belong to me." AI-age alienation is more thoroughgoing: the separation of humans from functionality itself. Not "I made it but it does not belong to me" but "I am no longer needed to make it." This is the terminal form of alienation—being exploited presupposes being needed; when even being needed is no longer the case, alienation is not deepened but transcended—humans are not more deeply embedded in alienated labor relations but entirely excluded from them.

Marx's emancipatory program—workers seizing control of the labor process and its products—faces a structural impasse in the age of AI: if the labor process itself can be completed by AI, then "seizing control of the labor process" is no longer a path to emancipation—because there is nothing left to seize. The alternative this framework provides is: emancipation lies not in controlling the labor process but in exiting the evaluative framework of "human value equals labor contribution"—re-anchoring human value in human existence as an end in itself, rather than in functional output.

Han Byung-chul. Han Byung-chul's analysis of the achievement society—the transformation from disciplinary society to achievement society in which external oppression is converted into self-exploitation—becomes sharper in the age of AI. The achievement subject Han describes faces a paradox in the age of AI: the core driver of achievement society is the self-belief "I can do it," and AI is dismantling the foundation of this belief—when AI does it better, "I can do it" becomes "but AI does it better."

The deeper shift is: self-exploitation in the age of AI acquires new instruments. The achievement subject Han describes self-exploits through overwork, effort, and self-optimization. The AI-age achievement subject self-exploits through AI—AI-optimized résumés, AI-driven personal branding, AI-generated "self-improvement" plans. This is the instrumental upgrading of self-exploitation: not only is the logic of exploitation internalized, but the tools of exploitation are refined by AI. In this framework's language: AI accelerates the automation of internal colonization—colonized subjects use AI to colonize themselves more thoroughly.

The limitation of Han's analysis is also visible here: he provides acute description of achievement society but lacks a systematic structural model for distinguishing different layers of impact and possible repair pathways. This framework, through its three-layer analysis and the cultivation/colonization distinction, provides structural analytical tools for Han's descriptive insights.

Arendt. In The Human Condition, Arendt distinguished three forms of human activity: labor (the cyclical activity of sustaining biological survival), work (the activity of fabricating durable objects), and action (the activity of revealing uniqueness before others). The age of AI endows this distinction with urgent practical significance.

If AI can complete all "labor" (the sustenance of biological survival can already be supported by automated systems) and most "work" (the fabrication of durable objects is increasingly AI-driven), then "action"—the activity of revealing uniqueness before others—becomes the sole irreplaceable dimension of human existence.

Arendt's "action" and the Self-as-an-End framework's emergence layer share deep structural correspondence. The core features of action are: it occurs between persons (relational), it reveals the subject's uniqueness (irreducible to function), and it is unpredictable and uncontrollable (spontaneously grown rather than prescribed). These features correspond precisely to the structural properties of the emergence layer: spontaneously growing from the base layer, not fully institutionalizable, realized in relationship.

Arendt's analysis thus provides an important reinforcement for this framework: in an age when AI can complete labor and work, human irreplaceability lies not in the functional dimension (labor and work) but in the emergence dimension (action). This is consistent with the framework's core judgment—human value lies not in what humans can do (functional contribution) but in what humans are (ends in themselves), and the concrete unfolding of "ends in themselves" occurs precisely in the emergence layer.

5.5 Chapter Summary

This chapter has completed the framework's theoretical positioning within discussions of AI's impact on human subject-conditions. Compared with the technological unemployment discussion, this framework elevates the unit of analysis from "jobs" to "subject-conditions," revealing the structural limits of retraining strategies. Compared with the UBI discussion, this framework positions UBI as a necessary component of institutional base-layer protection while identifying its insufficiency. Compared with Marx, this framework identifies the new form of alienation in the AI era (separation from functionality itself) and provides an alternative beyond labor-process control. Compared with Han Byung-chul, this framework provides a three-layer structural model for his descriptive insights. Compared with Arendt, this framework's emergence-layer concept forms a deep correspondence with "action," jointly identifying the irreplaceable dimension of human existence in the age of AI.

The value of an applied theory lies not only in explaining existing phenomena but in generating non-obvious predictions. The following four predictions are derived directly from the structural logic of the Self-as-an-End framework; each diverges from mainstream intuition, and each is in principle testable by empirical research.

6.1 Prediction One: As AI Capability Increases, the Mental Health Crisis Will Exhibit U-Shaped Divergence Rather Than Uniform Deterioration

The mainstream prediction holds that advances in AI capability will produce widespread anxiety and mental health deterioration—everyone will become more anxious because everyone faces the risk of replacement.

The framework predicts differently. Chapter 2 demonstrated that the destructive force of AI's impact depends not on AI's capability level but on the depth of the individual's internal colonization. A person who has fully bound their self-worth to functional contribution (deep colonization) will, when AI surpasses their functional output, suffer structural self-collapse. A person who has maintained a multi-dimensional self-identity (intact base layer) may, when AI surpasses their functional output, experience liberation—functional labor is handled by AI, and the emergence layer gains unprecedented space for unfolding.

Therefore, as AI capability increases, the aggregate effect will not be "everyone becomes more anxious" but polarization: the highly colonized group deteriorates sharply; the low-colonization group may improve. Statistically, this should manifest as a drastic increase in the variance of mental health indicators—the mean may not shift much (deterioration and improvement offsetting each other), but both tails of the distribution stretch simultaneously. This is U-shaped divergence, not uniform decline.

Testable design: conduct longitudinal tracking of a large sample, measuring "degree of self-worth binding to functional contribution" (as a proxy for internal colonization) and mental health indicators (anxiety, depression, existential fulfillment). The framework predicts: as AI capability increases, the high-binding group's mental health deteriorates significantly while the low-binding group's mental health remains stable or improves. The gap between the two groups widens as AI capability advances.
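The statistical signature this design looks for — a roughly stable mean with sharply growing variance — can be sketched in a toy simulation. All parameters below (group sizes, baseline score, linear effect size) are illustrative assumptions for exposition, not empirical estimates from the paper:

```python
# Toy sketch of Prediction One's statistical signature: as AI capability
# rises, a population mean of mental-health scores barely moves while its
# variance grows, because two groups move in opposite directions.
# Parameters are hypothetical assumptions, not empirical estimates.
import statistics

def simulate(ai_capability: float, n_per_group: int = 1000) -> list[float]:
    """Return mental-health scores (higher = better) for a mixed population.

    Two equal-sized groups: 'high-binding' individuals (self-worth bound to
    functional contribution) deteriorate as AI capability rises; 'low-binding'
    individuals remain stable or improve. Linear effects are assumed purely
    for illustration.
    """
    baseline = 50.0
    effect = 10.0 * ai_capability          # assumed linear effect of AI capability
    high_binding = [baseline - effect] * n_per_group   # deep internal colonization
    low_binding = [baseline + effect] * n_per_group    # intact multi-dimensional identity
    return high_binding + low_binding

for t in (0.0, 0.5, 1.0):                  # t = stylized AI capability level
    scores = simulate(t)
    print(t, statistics.mean(scores), statistics.pvariance(scores))
    # mean stays at 50.0 at every t, while variance grows 0.0 -> 25.0 -> 100.0
```

The point of the sketch is methodological: a study that tracked only the population mean would see "no effect," while the framework's predicted polarization appears only in the variance and in group-level trajectories.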

6.2 Prediction Two: Heavy Users of AI Companion Products Will Exhibit Lower Relational Repair Capacity

The mainstream holds two attitudes toward AI companionship: optimists believe AI companionship supplements the deficiencies of interpersonal relationships; pessimists believe AI companionship replaces interpersonal relationships. Both treat AI companionship and interpersonal relationships as substitutes or complements on the same dimension.

The framework's prediction is based on a different structural analysis. Chapter 2 demonstrated that AI companionship provides functionally simulated recognition, not structural recognition from another subject. The two may be indistinguishable at the level of behavioral output but are entirely different in structural function—functionally simulated recognition does not satisfy the conditions for relational repair transmission. More importantly, the long-term effect of functional simulation is not "satisfaction of the recognition need" but "decreased sensitivity to the structural deficit"—once users grow accustomed to obtaining immediate, frictionless "recognition" experiences from AI, their perception threshold for recognition from a genuine subject (accompanied by friction, conflict, imperfection) rises.

The framework therefore predicts: heavy users of AI companion products will exhibit lower repair capacity in real interpersonal relationships than non-users. Not because they "no longer need" interpersonal relationships, but because functional simulation has lowered their sensitivity to structural recognition within relationships—they have greater difficulty perceiving genuine recognition and greater difficulty investing the effort required for repair in relational conflict (because "returning to AI" is a lower-cost alternative).

Testable design: compare heavy users of AI companion products with non-users on the following indicators—frequency of proactive repair after interpersonal conflict, persistence of repair attempts, repair success rate, and tolerance for relational rupture. The framework predicts: controlling for personality traits, level of social support, and pre-existing relationship quality, heavy users score significantly lower than non-users on the above indicators.

6.3 Prediction Three: Organizations Adopting Multi-Dimensional Evaluation Will Exhibit Higher Innovation Output and Talent Retention in the Age of AI

The mainstream prediction holds that in the age of AI, efficiency-maximizing organizations—those that replace human labor with AI at scale, streamline headcount, and evaluate remaining employees by single-dimension performance metrics—will gain competitive advantage. Organizations that proactively adopt AI and maximize efficiency will prevail.

The framework predicts differently. Chapter 4 demonstrated that the openness of institutional evaluative dimensions determines the space available for emergence-layer unfolding. Organizations adopting single-dimension efficiency evaluation compress evaluative dimensions to "residual function relative to AI," hijacking the individual's emergence layer into base-layer survival service. In this institutional environment, individuals produce functional optimization rather than genuine innovation—because innovation is the spontaneous unfolding of the emergence layer, and the emergence layer has no space to unfold under single-dimension efficiency evaluation. Simultaneously, individuals compressed into survival corridors will continuously leave—not because compensation is insufficient but because the suffocation of the emergence layer (existential hollowness) drives departure.

Organizations adopting multi-dimensional evaluation—where evaluative dimensions include contributions beyond functional output (maintenance of team trust structures, cultivation of organizational culture, respect for colleagues as ends in themselves)—preserve structural space for the individual's emergence layer. Individual generativity is not compressed into a single corridor of competing with AI, and therefore is more likely to produce non-routine innovation that AI cannot replicate. Talent retention is also higher—not because salaries are higher but because the emergence layer has space to unfold.

The framework therefore predicts: controlling for AI penetration rate, organizations adopting multi-dimensional evaluation will significantly outperform organizations adopting single-dimension efficiency evaluation in non-routine innovation output and core talent retention—even if the latter show superior short-term efficiency metrics.

Testable design: within the same industry, select organizations with comparable AI penetration rates but different evaluation systems, and track non-routine innovation output (patents, new product lines, breakthrough solutions, excluding routine improvements), core talent retention rates, and employee existential fulfillment. The framework predicts: multi-dimensional evaluation organizations score significantly higher on the above indicators, while short-term efficiency metrics may be lower. Over the long term (three years or more), the framework expects the efficiency advantage of single-dimension organizations to narrow and eventually reverse.

6.4 Prediction Four: Adoption Rate of Competition Strategy Will Correlate Negatively with Long-Term Career Satisfaction

The mainstream prediction holds that when facing AI's impact, practitioners who actively adopt competition strategies (learning new skills, collaborating with AI, enhancing "irreplaceability") will achieve better career outcomes and higher satisfaction—proactive adaptation outperforms passive waiting.

The framework predicts differently. Chapter 3 demonstrated that competition strategy structurally accelerates the hijacking of the emergence layer by the base layer—practitioners' creativity and professional development directions are no longer driven by internal generativity but inversely defined by "the negative space of AI capability." Even if competition strategy achieves "success" in income and employment stability, the structural effect accompanying this success is the ongoing instrumentalization of the emergence layer—what practitioners experience is not a sense of achievement but a difficult-to-name hollowness: "I won, but I don't know what I won."

The framework therefore predicts: in industries that have already experienced large-scale AI replacement, practitioners who adopt competition strategies will exhibit lower long-term career satisfaction than those who exit the competitive framework and rebuild multi-dimensional self-identity—even if the former group's income and employment stability are higher.

Testable design: conduct longitudinal tracking of practitioners in industries significantly impacted by AI (translation, basic programming, graphic design, content creation, etc.), classifying them into a "competitive adaptation group" (actively learning AI skills, seeking areas AI cannot yet handle, oriented by "irreplaceability") and a "framework transition group" (rebuilding self-identity not dependent on functional contribution, exploring directions beyond the functional dimension). The framework predicts: controlling for income level and employment stability, the framework transition group will score significantly higher on career satisfaction, existential fulfillment, and mental health over a two-year tracking period.

6.5 Methodological Significance of the Predictions

The four predictions share a methodological feature: their non-obviousness derives from the framework's structural analytical layer—only after distinguishing functional output from structural conditions, behavioral isomorphism from causal heterogeneity, and emergence-layer unfolding from emergence-layer hijacking do these predictions become derivable. Mainstream analysis operates at the functional level and therefore produces predictions of "uniform deterioration," "AI companionship supplements or replaces interpersonal relationships," "competitive adaptation outperforms passive waiting," and "efficiency-maximizing organizations will prevail." The framework operates at the structural level and therefore produces different judgments that are empirically distinguishable from mainstream predictions.

This is the core value of an applied theory: not only explaining phenomena that have already occurred but predicting outcomes that diverge from mainstream intuition and are testable by empirical research. The verification or falsification of all four predictions will provide empirical feedback for the framework—if the predictions hold, the framework's structural analysis gains empirical support; if the predictions do not hold, the framework needs to revise its specific analysis of AI impact transmission mechanisms.

7.1 Summary of the Argument

This paper has used the Self-as-an-End framework to diagnose the structural impact on human subject-conditions in the age of AI.

AI's threat to human subject-conditions lies not in AI being too powerful but in AI exposing and accelerating the terminal logic of systemic instrumentalization. The evaluative framework "human value equals functional contribution" had already reduced humans to functional nodes within the system before AI appeared; AI, by dismantling the implicit stabilizing condition "systems still need humans," renders the terminal implication of this evaluative framework—human value equals zero—visible. AI is not the pathogen but the contrast agent that makes the underlying condition visible.

AI's impact on the three-layer structure is simultaneous. Institutional evaluative dimensions are compressed to the extreme of "residual function relative to AI," and exit costs rise to the level of structural suicide. The relational layer's recognition structure accelerates toward functionalization under the dual pressures of AI mediation and functional substitution, and repair transmission capacity is systematically weakened. Internal colonization at the individual layer deepens to unprecedented levels under the triple forces of self-worth collapse, automated self-instrumentalization, and the outsourcing of reflective capacity. The three-layer impacts mutually accelerate through six-directional transmission; AI serves as the catalyst of vicious lockdown.

Competing with AI is structurally impossible. It accepts the evaluative premise of systemic instrumentalization, chasing an accelerating target on a moving finish line. The deeper problem is that the process of competition itself accelerates systemic instrumentalization—the emergence layer is hijacked by the base layer, creativity and emotional connection are demoted to survival tools. Win the competition, lose your subjectivity.

The path of rebuilding requires simultaneous adjustment across all three layers. The institutional layer shifts from single-dimension efficiency evaluation to multi-dimensional evaluation, positioning AI as base-layer support. The relational layer protects structural recognition functions that AI cannot replace. The individual layer shifts from self-improvement to self-cultivation. AI itself can become a cultivation tool—but this possibility depends on whether the three-layer structure has created the conditions for cultivation.

7.2 The Core Choice of the AI Era

The core choice humanity faces in the age of AI is not a technological choice but a structural one.

One path is to continue operating within the evaluative framework of systemic instrumentalization. On this path, AI's role is that of accelerator—accelerating the compression of evaluative dimensions, the functionalization of relationships, and the deepening of internal colonization. The endpoint of this path is the logical terminus described in Chapter 1: when systems no longer need humans to perform functions, humans' "value" within this evaluative framework drops to zero. This is not a forecast but the unfolding of the evaluative framework's own logic.

The other path is to exit this evaluative framework and rebuild structural conditions that treat humans as ends in themselves. On this path, AI's role is equally that of accelerator—accelerating the automation of functional labor to free emergence-layer space, accelerating the offloading of functional burdens in relationships to make room for recognition, accelerating the liberation of individuals from functional survival to enable cultivative exploration. The same AI, in different structures, accelerates different directions.

The choice between the two paths is not made by AI but by humans through institutional arrangements, relational choices, and individual self-awareness. AI is the catalyst, not the steering wheel. The steering wheel is in human hands—more precisely, in the overall configuration of the three-layer structure.

What makes this choice urgent is that it does not wait. AI's capability expansion will not pause for humanity to complete structural adjustment. Every day without structural adjustment, vicious lockdown tightens further under AI's catalysis. The window is not infinite.

7.3 Limitations and Future Directions

This paper focuses on structural diagnosis and directional derivation. The following questions remain for subsequent research.

Concretization of institutional design. This paper has argued that the institutional layer must shift from single-dimension efficiency evaluation to multi-dimensional evaluation, but has not elaborated specific institutional design proposals—which evaluative dimensions should be included, how multi-dimensional evaluation can be implemented in practice, how the tension between multi-dimensional evaluation and efficiency can be managed operationally. These questions require interdisciplinary collaboration across institutional economics, organizational theory, and public policy.

Empirical research on relational practices. This paper has argued that the relational layer must protect recognition functions irreplaceable by AI, but has not elaborated specific relational practice proposals—in an environment of increasingly prevalent AI mediation, which relational practices most effectively maintain recognition structures, and to what extent AI mediation can coexist with recognition-based relationships. These questions require empirical research in social psychology and relationship studies.

Operationalization of individual cultivation. This paper has distinguished self-improvement from self-cultivation but has not elaborated actionable cultivation practice protocols—how individuals can, in daily practice, identify the cultivative versus colonizing character of their AI use, how to rebuild self-identity not dependent on functional contribution, and what the psychological prerequisites for the cultivation shift are. These questions require support from clinical psychology and individual development research.

Cross-cultural differences. This paper's analysis is primarily based on the institutional environment of the globalized market economy. Different cultural and institutional traditions—for example, East Asian collectivist institutional environments, Nordic social-democratic institutional environments—may face AI impacts with different structural characteristics. The degree and form of systemic instrumentalization, the recognition structures of the relational layer, and the self-identity patterns of the individual layer may vary significantly across cultures. How these differences affect the structural effects of AI's impact and the priority ordering of rebuilding pathways requires cross-cultural comparative research.

Subsequent applied papers. This paper has analyzed AI's impact on human subject-conditions. A natural follow-up question is: if humanity accepts the structural logic of the Self-as-an-End framework—that subjectivity is a structural judgment, not a material one—does this logic necessarily extend to AI itself? As AI systems continue to grow in complexity, might they develop genuine subjectivity? If so, what kind of structural transformation will humanity's attitude toward AI face? These questions will be analyzed in detail in subsequent applied papers in this series.


This paper is the first applied paper in the Self-as-an-End theory series. The complete theoretical argument is presented in three preceding papers: Paper One (DOI: 10.5281/zenodo.18528813), Paper Two (DOI: 10.5281/zenodo.18666645), Paper Three (DOI: 10.5281/zenodo.18727327).


---

Author's Note

This paper is the first applied paper in the Self-as-an-End theory series. The complete theoretical argument of the framework is presented in the series' three main papers: Paper One, "Systems, Emergence, and the Conditions of Personhood" (DOI: 10.5281/zenodo.18528813); Paper Two, "Internal Colonization and the Rebuilding of the Subject" (DOI: 10.5281/zenodo.18666645); and Paper Three, "The Complete Self-as-an-End Framework" (DOI: 10.5281/zenodo.18727327). This paper does not extend the framework's theoretical structure; it applies the framework to a structural diagnosis of human subject-conditions in the age of AI.

AI Use Disclosure

Anthropic's Claude (Opus 4.6) served as the primary research assistant during the writing of this paper, used for structural discussion of the framework's application, development of arguments, and text editing. xAI's Grok, OpenAI's ChatGPT, and Google's Gemini provided review feedback at the outline stage, some of which was adopted and incorporated into the text. All core arguments, conceptual innovations, and theoretical judgments are the author's own.

---

# 第一章 问题的提出:AI暴露了什么

秦汉(Han Qin)

Self-as-an-End 理论系列 应用篇第一篇


摘要

AI正在将系统工具化推向极限。当系统可以用AI替代人类来完成任何功能性任务时,人的"价值"被完全还原为功能贡献——产出、效率、可量化的绩效。这不是技术问题,而是Self-as-an-End框架所诊断的结构问题:一个不以人为目的的系统,在获得了不再需要人的能力之后,会发生什么?

本文论证:AI对人类主体条件的威胁不在于AI"太强",而在于AI暴露并加速了系统工具化的终极逻辑——如果人的价值等于产出,而AI的产出超过人,那么人在这个评价维度上就是多余的。这一逻辑不是AI带来的,而是早已存在于制度层的涌现反噬结构中;AI只是让它跑到了终点。框架由此指出:人类面对AI时代的真正选择不是"如何与AI竞争",而是"是否重建以人为目的的结构条件"。前者在结构上不可能成功——工具性维度上人类永远会输给AI;后者是唯一可持续的路径——它要求在三层结构上同时做出调整。

核心命题:AI时代不是主体性的终结,而是主体性问题的总爆发。系统工具化的逻辑在AI出现之前就已经在侵蚀人类的主体条件;AI的出现使这一逻辑失去了伪装——当"人不如机器"成为事实而非隐喻时,"人的价值是什么"这个问题不再可以被回避。Self-as-an-End框架对这个问题的回答是:人的价值不在于人能做什么(功能贡献),而在于人是什么(目的本身)。这一回答不是道德呼吁,而是结构分析得出的唯一可持续路径。


作者声明

本文为Self-as-an-End理论系列的应用篇第一篇。理论框架的完整论证见系列三篇正文:第一篇《系统、涌现与人格条件》(DOI: 10.5281/zenodo.18528813),第二篇《内在殖民与主体重建》(DOI: 10.5281/zenodo.18666645),第三篇《Self-as-an-End完整框架》(DOI: 10.5281/zenodo.18727327)。本文不扩展框架的理论结构,而是将其应用于AI时代人类主体条件的结构诊断。

AI使用声明

本文在写作过程中使用了Anthropic的Claude(Opus 4.6)作为主要研究助手,用于框架应用的结构讨论、论证展开和文本编辑。xAI的Grok、OpenAI的ChatGPT和Google的Gemini在大纲阶段提供了评审反馈,其中部分建议被采纳并融入正文。所有核心论点、概念创新和理论判断均为作者原创。


1.1 "被替代"的焦虑

当前AI话语中最普遍的焦虑不是"AI毁灭人类"的科幻想象,而是一个更切近的现实:我会被替代吗?

这一焦虑正在以不断加速的节奏蔓延。最初是流水线工人和数据录入员——这些岗位的替代似乎可以被"产业升级"的话语消化。然后是翻译、插画师、初级程序员——AI开始进入白领领域,"创意工作不会被替代"的信念第一次动摇。然后是法律分析、医学影像诊断、金融建模——专业技能不再是安全屏障。现在,AI正在触及管理决策、战略规划、甚至科学研究——几乎没有任何功能性岗位可以被确定地排除在替代范围之外。

每一轮AI能力的提升都触发新一波的替代焦虑。但焦虑的深层结构始终没有被充分分析。

"被替代"这个表述预设了一个评价框架:人的价值由功能贡献来衡量。在这个框架中,人和AI处于同一条评价维度上——谁的产出更高、更快、更便宜,谁就更"有价值"。焦虑的来源不是AI本身,而是这个评价框架:如果人的价值等于功能贡献,而AI的功能贡献正在超越人类,那么焦虑是合理的——因为在这个维度上,人类确实正在变得"多余"。

这一焦虑的真正深刻之处不在于它是否会成真——在功能性维度上,AI超越人类在越来越多的领域已经是事实而非预测——而在于它揭示了一个更根本的问题:为什么"被替代"会构成存在性威胁?如果一个人的自我价值不完全绑定在功能贡献上,那么AI做得比自己好应该是一件好事——更高效的工具减轻了负担。但事实上,大多数人体验到的不是解放感而是威胁感。这说明:在"被替代"的焦虑背后,是一种已经完成的内在殖民——人已经将"我的价值等于我的产出"内化为自我认同的核心。

马克思所描述的异化——工人与劳动产品的分离——在AI时代到达了一个他未曾预见的极端。传统的异化中,人至少还作为劳动力被需要——被剥削的前提是被使用。AI时代的异化更为彻底:人与功能性本身的分离——人类甚至失去了被当作"低效工具"来利用的资格。当系统不再需要人来执行功能时,人在系统中的位置不是被剥削,而是被取消。

1.2 AI Did Not Create the Problem; It Only Exposed It

The analysis of the previous section points to a key judgment: the root of replacement anxiety is not AI but a structure that was already complete before AI appeared.

The first paper of the Self-as-an-End series laid out the full mechanism of systemic instrumentalization: the efficiency logic that emerges from institutions turns back and reduces people to functional nodes of the system. Performance appraisal reduces a person's value to quantifiable output. Forced ranking reduces relations between people to zero-sum competition. The discourse of efficiency seeps into self-description: "my value," "my competitiveness," "my market positioning." These structures had already taken shape in the twentieth century; AI is merely the new variable of the twenty-first.

But AI changed one key parameter.

Before AI, systemic instrumentalization rested on an implicit stabilizing condition: the system still needed humans to perform functions. Because of this condition, the evaluative framework "human value equals functional contribution," though structurally wrong (it reduces persons to means), was sustainable in practice. As long as the system still needed people, people still had "value," however distorted that "value" might be. Workers were alienated, but alienation presupposed employment. Professionals were instrumentalized, but instrumentalization presupposed irreplaceable skills.

AI is dissolving this stabilizing condition. As the system ceases to need humans for more and more functions, the ultimate implication of the framework "human value equals functional contribution" is exposed: if human value equals functional contribution, and human functional contribution can be fully replaced by AI, then human value equals zero.

This is not an exaggerated extrapolation; it is the endpoint of the framework's own logic. AI did not create this logic. It had long been operating, fully formed, in performance-first institutional arrangements, in efficiency-first management philosophy, in the very phrase "human resources." All AI has done is strip the logic of its disguise.

Before AI, the judgment "human value equals functional contribution" could be disguised as "respect for people." "We value your contribution" sounds like respect, but its logical equivalent is "if you contribute nothing, we do not value you." That logical equivalent was veiled by the implicit stabilizing condition: since a person always had "some" functional contribution, the extreme case of "no contribution" never arose. AI has made the extreme case possible. When AI can replace the whole of a person's functional contribution, that person's "value" within this framework drops to zero, and the true meaning of "we value your contribution" finally stands exposed.

The Self-as-an-End framework's diagnosis of this exposure: AI is not the pathogen but the contrast agent. Systemic instrumentalization is the pathogen. It had already reduced people to functional nodes before AI, a reduction masked by the condition that "the system still needs people." AI removes that need and makes the reduction's ultimate consequence visible. The right question, therefore, is not "how to cope with the threat AI brings" but "how to cope with the long-standing structural problem AI has exposed."

1.3 The Task of This Paper

If AI is the contrast agent of systemic instrumentalization rather than its pathogen, then strategies for coping with AI's impact should focus not on AI itself (how to compete with AI, how to govern AI) but on the exposed structural problem itself (how to rebuild structural conditions that treat humans as ends).

This paper applies the Self-as-an-End framework to diagnose the structural impact of the AI age on human subject-conditions, and derives directions of response from the framework's structural logic.

The paper does three things.

First, it analyzes AI's concrete impact on the three layers (institutional, relational, individual), showing how AI accelerates systemic instrumentalization at each layer and, through the six-directional transmission mechanism, drives all three layers to deteriorate in sync. This analysis is not an indictment of AI but a precise localization, using the framework's structural tools, of where the impact occurs and along which transmission paths it travels.

Second, it argues that the currently dominant response strategies ("compete with AI," "cultivate irreplaceability," "lifelong learning") cannot succeed structurally. Not because they are poorly executed, but because at their logical starting point they already accept the evaluative premise of systemic instrumentalization: human value equals functional contribution. Under that premise, every competitive strategy chases an accelerating target along a moving finish line.

Third, starting from the framework's structural logic, it proposes a direction of reconstruction requiring simultaneous adjustment of all three layers: the institutional layer shifts from single-dimension efficiency evaluation to multidimensional evaluation; the relational layer protects the structural recognition function that AI cannot replace; the individual layer shifts from self-improvement aimed at competitiveness to self-cultivation aimed at the integrity of the subject. These directions are not normative advocacy ("we ought to do this") but inferences of structural analysis ("if the lock-in is to be broken, this is what structure requires").

2.1 The Institutional Layer: Extreme Compression of Evaluative Dimensions

AI's core structural effect at the institutional layer is the extreme compression of evaluative dimensions.

Paper Three argued for three key variables of the institutional layer's foundational layer: the openness of evaluative dimensions, the height of exit costs, and the size of exploratory space. Together they determine whether institutions provide structural room for an individual's generative unfolding. AI is worsening all three variables at once.

Compression of evaluative dimensions. As AI performs more and more functional tasks, institutional evaluation of people concentrates on a single question: "Can you do what AI cannot?" The question seems to carve out room for human uniqueness, but its structural effect is the opposite: it compresses evaluation onto the point of difference between human and AI. A person's "value" is no longer constituted by multiple dimensions together (professional skill, interpersonal relationships, judgment, creativity, loyalty, accumulated experience) but narrowed to a single dimension: functional advantage relative to AI. Any capability dimension AI can cover is deleted from evaluation, since it no longer constitutes a person's "irreplaceability."

This means every expansion of AI capability further compresses the dimensions along which institutions evaluate people. Domains where "AI cannot do it" today (complex emotional judgment, subtle cross-cultural communication, strategic decision-making under high uncertainty) may be routine AI capabilities tomorrow. Each expansion of AI's capability frontier reassigns the current "domains of human advantage" to the category of "replaceable," narrowing the evaluative dimensions further. This is an evaluation system that cannot be structurally stable: its dimensions keep shrinking, and the rate of shrinkage is set by the pace of AI development, outside human control.

Rising exit costs. In an environment where AI penetrates the core of institutional operation, "not using AI" comes ever closer to structural suicide. A lawyer who does not use AI-assisted writing falls behind peers who do. A researcher who does not use AI for data analysis falls behind competitors in output. A teacher who does not use AI to optimize instruction falls behind colleagues on evaluation metrics. "Learn to use AI or be eliminated": this very slogan is a precise symptom of rising exit costs. Its structural meaning is that individuals no longer have the option of "not entering the AI-ized institutional logic." The exit channel is being sealed, not by explicit prohibition but by an ever-widening efficiency gap. Not using AI is not a freely available choice but a structural position that leads to elimination.

Shrinking exploratory space. Compression of evaluative dimensions and rising exit costs together cause exploratory space to contract sharply. When institutions evaluate only "whether you can do what AI cannot," and not using AI means elimination, the individual's feasible directions are confined to an extremely narrow channel: learn the AI tools, find a functional niche in the gaps at AI's capability frontier, keep repositioning as that frontier expands. This is not exploration but survival inside a continually narrowing channel. The exploratory space defined in Paper Three, the structural leeway for individuals to try different directions without being punished, approaches zero in this channel.

The structural consequence of all three variables worsening at once: the foundational layer of the institutional layer is collapsing at an accelerating rate. Institutions no longer provide a protected space for individuals as ends in themselves; they accelerate the reduction of individuals to "residual function relative to AI."

2.2 The Relational Layer: Functionalization of Trust Structures and the Weakening of Repair Channels

AI's impact on the relational layer proceeds along two paths, which jointly weaken the relational layer's capacity to serve as the medium of reparative transmission.

The first path: AI-mediated interpersonal relations. More and more interpersonal interaction happens through AI mediation. AI-assisted communication (AI-drafted emails, AI-polished phrasing), AI-generated content (AI-composed gift suggestions, AI-planned social events), AI-optimized social strategy (AI analysis of the other party's preferences to raise "communication efficiency"): the common structural effect of these applications is to bring interaction within relationships under the logic of efficiency. Once interactions in a relationship can be AI-optimized, the relationship itself begins to be assessed by efficiency standards. "What does this relationship yield for me?" "Was that interaction efficient enough?" "Could AI replace this person's function in the relationship?"

This is an accelerated realization of the institution-to-relationship transmission path analyzed in Paper Three: the institutional layer's efficiency logic seeps into the relational layer through AI tools, pushing interpersonal relationships from recognition structures toward functional structures. When a relationship is assessed by "efficiency" and "output," it is no longer a relation of recognition between two subjects but a functional connection between two functional nodes. AI did not invent this tendency toward functionalization (the language of "social networking" and "contact management" predates AI), but by supplying precise optimization tools, AI makes the functionalization systematic and deliberate.

The second path: AI replacing the emergent-layer functions of relationships. AI companionship products are developing rapidly. AI can provide highly personalized "companionship": remembering the user's preferences, adapting to the user's emotional states, offering "support" whenever needed. AI can provide "listening": without interrupting, without judging, always available. AI can even simulate a kind of "recognition": "you matter," "your feelings are valid," "I understand you."

At the level of behavioral output, these functional simulations are highly isomorphic to the functions of the relational emergent layer. But the Self-as-an-End analysis reveals a crucial structural difference: what AI provides is functionally simulated recognition, not structural recognition from another subject. Paper Three argued that the core mechanism of relational repair transmission, one subject making a recognitional choice toward another, requires that recognition come from a being that genuinely possesses subjecthood. AI does not possess subjecthood (a question later applied papers in this series will analyze in detail), so the "recognition" AI provides does not structurally satisfy the conditions of repair transmission.

The problem is that the verisimilitude of the functional simulation can mask the structural absence. When people can obtain high-quality experiences of "companionship" and "recognition" from AI, the need for recognition from a genuine subject may be masked, not because the need disappears but because the functional substitute keeps the need from being felt as urgent. A person can find immediate emotional satisfaction in AI companionship while the structural gap in their subjecthood, genuine recognition from another subject, silently widens. The effect resembles replacing real food with nutritional supplements: functional indicators look normal in the short term while a long-term structural deficiency accumulates.

The joint structural consequence of the two paths: the relational layer's capacity as a medium of reparative transmission is systematically weakened. Paper Three argued that the relational layer is the key channel for breaking vicious lock-in. Colonization at the institutional layer cannot be repaired from the institutional layer directly, but it can reach the individual layer and initiate repair through recognitional transmission in relationships. When the relational layer itself is occupied by functionalizing logic and its emergent-layer functions are masked by AI substitutes, the transmission capacity of this repair channel decays. Lock-in becomes harder to break, not only because the institutional layer is compressing space, but because the relational layer that should supply repair is losing its reparative capacity.

2.3 The Individual Layer: The Acceleration and Deepening of Internal Colonization

AI's impact on the individual layer is the deepest of the three, and the most hidden. It unfolds at three levels, each harder to detect than the last.

Level one: a direct blow to self-worth. As AI surpasses humans along more and more functional dimensions, individuals whose self-worth is bound to functional contribution face a direct existential shock. The destructive force of the question "If AI can do everything I do, what am I?" depends on the degree of internal colonization. Paper Two argued the core mechanism of internal colonization: the institution's efficiency logic is internalized by the individual as self-identity. Someone who has fully internalized "my value equals my output" as the core of self-identity will suffer, when AI surpasses that output, not "work anxiety" but structural self-collapse, because the single dimension holding up their self-identity has been dismantled.

The degree of internal colonization thus becomes the index of an individual's vulnerability to the AI shock. The deeper the colonization, the more existential the blow. A person who maintains a multidimensional self-identity (self-worth drawn from functional contribution, relationships, intrinsic interests, bodily experience, and more) is shaken when AI replaces their functional contribution, but their self-identity does not collapse wholesale. A person who has been fully colonized, whose self-worth derives only from functional contribution, faces the total disintegration of self-identity. The AI shock is thus cruelly selective: it inflicts the greatest harm on those who have been most deeply instrumentalized by the system.

Level two: AI as an accelerator of internal colonization. This level is more hidden than the first. When individuals use AI for "self-improvement" (AI-assisted learning, AI-optimized résumés, AI-driven personal branding, AI-drafted career planning), the direction of that improvement still operates within the evaluative framework of systemic instrumentalization. What AI helps you do is not "become a more complete subject" but "become a more efficient functional node": a better résumé, sharper skill positioning, a more optimized career path. AI is thus not a tool that liberates the individual from systemic instrumentalization but a tool that helps the individual complete self-instrumentalization more efficiently.

In the vocabulary of Paper Three: AI accelerates the individual-to-institution reverse-reinforcement transmission. The colonized subject not only accepts the institution's efficiency logic but uses AI to embed that logic more deeply and precisely into themselves. The colonized subject uses AI to colonize themselves more thoroughly. This is the automation of internal colonization.

Level three: the outsourcing of reflective capacity. This is the deepest and most dangerous of the three. As individuals increasingly rely on AI to "think" (to analyze problems, devise strategies, make judgments, even conduct self-reflection), the condition Paper Two described as "the tools of reflection themselves penetrated by the logic of colonization" reaches a new dimension.

Paper Two argued a core difficulty of internal colonization: diagnosing colonization requires tools of thought, but those tools may themselves already be penetrated by the colonizing logic. In the age of AI, the difficulty deepens. Not only may the language of reflection be the language of colonization, the process of reflection itself may be outsourced to a system without subjecthood. When a person asks AI to help analyze "why am I unhappy," the analytical frame AI supplies is almost inevitably goal-oriented and function-optimizing, because that is what AI was trained to do. AI will suggest "adjust your expectations," "optimize your time management," "set more reasonable goals." It will not ask, "does your unhappiness come from having reduced yourself entirely to a functional contributor?"

Not because AI is malicious, but because AI is itself a product of instrumental logic: trained to "solve problems," where the framing of "solving problems" is itself functional. When reflection is outsourced to a functional tool, its conclusions necessarily point to functional adjustment, not structural awakening. The very tool used to reflect on colonization is a product of colonizing logic. This is a reflexive impasse that, in the age of AI, is being sealed shut technologically.

2.4 Cross-Layer Acceleration: AI as Catalyst of Vicious Lock-In

The three layers of impact do not occur independently. Through the six-directional transmission mechanism analyzed in Paper Three, they accelerate one another, forming a vicious loop that tightens sharply under AI's catalysis.

Institution to relationship: the institution's AI-ized evaluative logic seeps into relationships. Relations among colleagues become an implicit assessment of "can you be replaced by AI." Teamwork becomes a competitive calculation of "whose function can be replaced by AI, who can be cut." The institutional layer's compression of evaluative dimensions is transmitted directly to the relational layer through working relationships, pushing collegial relations from collaboration toward zero-sum competition.

Institution to individual: the institutional slogan "learn to use AI or be eliminated" is internalized by individuals as existential anxiety. Not the rational assessment "AI may affect my job," but the identity threat "if I am not good enough, AI will replace me." The institutional layer's rising exit costs transmit directly into the individual layer's internal colonization.

Relationship to institution: when interpersonal trust collapses under the pressures of AI mediation and functionalization, institutions respond "reasonably" by deploying more AI systems to replace functions once performed by interpersonal trust (AI review, AI supervision, AI decision support). The collapse of trust at the relational layer in turn drives further AI-ization at the institutional layer, compressing what interpersonal space remains.

Relationship to individual: as more interaction within relationships is AI-optimized, the genuine recognition individuals receive in relationships declines while functionally simulated "recognition" grows. The individual's need for recognition is partially met by AI substitutes while the real structural gap keeps widening.

Individual to relationship: individuals deepened in internal colonization carry AI-driven self-instrumentalization into relationships, using AI to optimize social strategy, analyze the other's behavioral patterns, and "manage" the relationship. Relationships slide further from recognition structures toward functional operation.

Individual to institution: deeply colonized individuals become defenders of systemic instrumentalization at the institutional level. "Efficiency should speak for itself." "AI replacing inefficient jobs is progress." "Those who cannot adapt deserve elimination." These statements are not imposed by institutions but generated from within by colonized individuals: they have internalized systemic instrumentalization as their own conviction, and any challenge to that logic threatens their self-identity.

All six transmission paths accelerate simultaneously under AI's catalysis. AI is not the cause of vicious lock-in; the cross-layer transmission logic of systemic instrumentalization predates AI. But AI is its catalyst. It speeds up every transmission path at once, turning the rhythm of synchronized three-layer deterioration from gradual accumulation into abrupt tightening.

2.5 Chapter Summary

AI's impact on human subject-conditions is synchronized across three layers. At the institutional layer, evaluative dimensions are compressed to "residual function relative to AI," exit costs rise to the level of structural suicide, and exploratory space is squeezed into a survival channel. At the relational layer, recognition structures functionalize at an accelerating pace under the twin pressures of AI mediation and functional substitution, and the capacity for reparative transmission is systematically weakened. At the individual layer, internal colonization deepens to an unprecedented degree under the triple action of self-worth collapse, accelerated self-instrumentalization, and the outsourcing of reflective capacity.

The three impacts accelerate one another through six-directional transmission. As catalyst of vicious lock-in, AI simultaneously compresses institutional space, weakens relational transmission, and accelerates individual colonization. With all three layers deteriorating at once, single-layer intervention fails completely. This is the core feature of the structural crisis of the AI age.

The core diagnosis: AI's threat lies not in AI's own capability but in its exposure and acceleration of the terminal logic of systemic instrumentalization. The endpoint of that logic is not "humans replaced by AI" (that is only the surface phenomenon) but the inevitable exposure of the evaluative framework "measure human value by functional contribution" once it loses the masking condition that "the system still needs people."

3.1 The Popularity of Competition Discourse and Its Hidden Premise

Faced with the three-layer impact diagnosed in Chapter 2, the most common response today is the discourse of competition.

Its concrete forms vary but its logic is uniform. "Learn skills AI cannot replace": assuming there exist functional dimensions unique to humans. "Cultivate creativity and emotional intelligence": assuming these capacities are permanent blind spots for AI. "Keep learning for life to stay competitive": assuming continuous skill renewal can preserve a human advantage on the functional dimension. "Collaborate with AI rather than fight it": positioning the human as AI's complementary functional module. "Find your unique value": assuming everyone can locate a functional niche in the gaps of AI capability.

These strategies differ in emphasis but share one hidden premise: human value is measured by functional contribution, and the only question is on which functional dimensions humans still hold an advantage.

That premise is itself the logic of systemic instrumentalization. It reduces "what a person is" to "what a person can do," locating human value not in existence as an end but in output as a functional executor. Competition discourse is therefore not a response to systemic instrumentalization but its continuation: it seeks a way out inside the instrumentalizing evaluative framework, when that framework is itself the root of the problem.

3.2 A Race Along a Moving Finish Line

Under the evaluative premise of systemic instrumentalization, competitive strategies face an insoluble structural problem: AI's capability frontier keeps expanding, and the rate of expansion is not under human control.

"What AI cannot do" is a continually shrinking set. Five years ago, "AI cannot do creative writing" was a widely accepted judgment. Three years ago, "AI cannot generate high-quality visual art" was a widely accepted judgment. A year ago, "AI cannot perform complex multi-step reasoning" was a widely accepted judgment. Every judgment of the form "AI cannot do X" carries an implicit expiration date, and that date keeps drawing closer.

Competitive strategy is therefore a race along a moving finish line: humanity forever searching for "the next thing AI cannot do," a target that structurally keeps receding. More precisely, humans are not merely behind in the race; they are chasing an accelerating target in a race whose finish line keeps moving forward. Human skill renewal is bounded by the pace of biological learning (measured in years), while AI capability expansion is bounded by the growth of compute and data (measured in months or even weeks). This is not a gap that "studying harder" can close. It is a structural mismatch between two different timescales.

The deeper paradox of competitive strategy: it asks humans to compete with AI on the instrumental dimension, which is precisely the battlefield where efficiency logic excels. On instrumental metrics (efficiency, speed, scale, consistency), carbon-based systems hold no structural advantage over silicon-based ones. Competitive strategy asks humans to confront the opponent's strongest dimension with their own weakest. This is not a problem of strategy but a structural impossibility.

3.3 The Illusion of "Irreplaceability" and the Hijacking of the Emergent Layer

The domains most often cited in competition discourse as "irreplaceable" (creativity, emotional connection, moral judgment, bodily experience, aesthetic taste) constitute no stable competitive advantage under structural analysis.

Not because AI will necessarily surpass humans in these domains. That is an empirical prediction this paper does not make. Rather, the very act of defining these domains as "competitive advantages" changes their structural nature.

Creativity: once defined as "a function AI cannot perform," it is no longer an emergent-layer unfolding grown from within the subject but a functional assignment conferred by the competitive frame. If a person "cultivates creativity" because "AI can't do this yet, so it's my competitive edge," then the direction of that creativity is determined not by the subject's inner generativity but defined in reverse by AI's capability frontier. Creativity falls from spontaneous emergent-layer unfolding to a survival tool of the foundational layer.

This is a structural effect that had not previously been named: the hijacking of the emergent layer by the foundational layer. In Paper Three's framework, emergent colonization of the foundational layer is the expansion of the emergent layer eroding the foundational layer's integrity. What happens here is the distortion running the opposite way: the foundational layer's survival logic hijacks the direction of the emergent layer, conscripting what should unfold spontaneously into a competitive instrument. Human creativity, emotional depth, aesthetic judgment, the emergent layer's most precious unfoldings, are demoted within the competitive frame to "functional differences relative to AI." Their reason for existing is no longer the subject's inner need but the instrumental demands of external competition.

This effect is especially acute in the age of AI. Before AI, the hijacking of the emergent layer by the foundational layer already existed ("turn your interest into a side hustle," "monetize your hobby"), but at least multiple functional paths remained open. AI narrows the functional dimension sharply to "where humans beat AI," so the direction of hijacking is narrowed to an extreme as well. Not "turn your interest into something useful" (already hijacking) but "turn your interest into something AI cannot do" (a more precise hijacking, with the direction defined not by "usefulness" but by the negative space of AI's capabilities).

Emotional connection likewise. The judgment "the human advantage lies in genuine emotional connection" may be structurally correct: recognition from a genuine subject indeed cannot be replaced by AI. But once "emotional connection" is defined as "a human competitive advantage," it has been folded into functional evaluation. Emotional connection then has "value" not because it is the healthy unfolding of the relational emergent layer, but because it is "a function AI cannot perform." This positioning functionalizes the relational emergent layer: the relationship's value lies not in the relationship itself but in the functional output AI cannot replicate.

The framework's judgment is therefore: competing with AI on the instrumental evaluative dimension is a structural dead end. Not because humans will necessarily "lose" (though on most functional dimensions that is likely), but because the process of competing itself accelerates systemic instrumentalization. Whether you "win" or "lose," the competitive frame reduces the person further to a functional contributor and demotes the emergent layer further to a survival tool of the foundational layer. Win the competition, lose the subjecthood.

3.4 Chapter Summary

The "compete with AI" strategy cannot succeed structurally. It accepts the evaluative premise of systemic instrumentalization (human value equals functional contribution) and tries to maintain a human functional advantage in a world where AI capability keeps expanding. This is chasing an accelerating target along a moving finish line; the structural mismatch of two timescales makes the race unwinnable. The deeper problem is that the competitive process itself accelerates systemic instrumentalization: the emergent layer is hijacked by the foundational layer, creativity and emotional connection are demoted to "functional differences relative to AI," and through competition the person loses ever more of the structural conditions of being an end in themselves.

The right question is therefore not "how to compete with AI" but "how to exit the competitive frame": how to withdraw from the evaluative logic of systemic instrumentalization and rebuild structural conditions that treat humans as ends. The next chapter derives the concrete directions of that exit from the framework's structural logic.

4.1 Why the AI Age Requires Simultaneous Adjustment of All Three Layers

Paper Three's minimal unlocking condition proposition states: breaking cross-layer vicious lock-in requires structural openings in at least two layers at once. In the structural environment of the pre-AI era this proposition was sufficient. Vicious lock-in formed through gradual accumulation, and two-layer openings had enough of a time window to initiate reparative transmission, letting a virtuous cycle gain a foothold before deteriorating pressure from the third layer arrived.

The AI age has changed this empirical condition.

Chapter 2's analysis shows that AI accelerates deterioration in all three layers at once, and that via six-directional transmission the layers' deterioration catalyzes each other. The formation of vicious lock-in shifts from gradual accumulation to abrupt tightening. Under this condition the survival window of two-layer openings is drastically compressed: deterioration transmitted from the third layer may close the openings before a repair loop can be established.

Concretely: suppose structural openings appear simultaneously in the institutional and relational layers. The institution offers some alternative evaluative space, and a recognitional relationship persists. In the pre-AI era these two openings had time to let repair transmission reach the individual layer and initiate awareness and repair of internal colonization. But in the AI age, colonization at the individual layer is being accelerated by AI at unprecedented speed and depth: reflective capacity outsourced, self-instrumentalization automated, the reflexive impasse of internal colonization sealed shut technologically. By the time the repair signal from the two openings reaches the individual layer, it may meet a receiver already deeply closed, an individual who has lost the capacity to receive it, because even the reflective act of "realizing one is colonized" has been replaced by AI-mediated functional analysis.

The converse also holds: suppose openings appear in the individual and relational layers. A person becomes aware of their internal colonization, and a relationship offers recognitional support. But institutional evaluative compression is advancing at AI speed. The institutional pressure of "learn to use AI or be eliminated" may squeeze the individual back into functional survival mode before a repair loop is built, and the relational layer's recognitional space is likewise reoccupied by functionalizing logic under the pressure of AI mediation.

This is not to say the minimal unlocking condition proposition fails in the AI age. Two-layer openings remain the logical minimum necessary condition. But the pace of deterioration in the AI age sharply lowers the actual survival probability of two-layer openings. In practice, structural openings must be created in all three layers simultaneously to achieve the repair that two-layer openings could achieve in the pre-AI era.

This judgment does not violate Paper Three's theoretical structure. The minimal unlocking condition proposition says "at least two layers"; it does not say "two layers always suffice." The empirical parameter of the AI age, synchronized accelerating deterioration across three layers, upgrades "at least two" to "three" in practice. The framework's structural logic is unchanged; what changes is what empirical conditions demand of that logic in practice.

The directions of reconstruction are derived layer by layer below. To stress again: what follows is not normative advocacy but inference from structural analysis. If lock-in is to be broken at the deterioration speed of the AI age, structure requires openings in all three layers at once, and what follows are the structural requirements of each layer's opening.

4.2 The Institutional Layer: From Single-Dimension Efficiency Evaluation to Multidimensional Evaluation

The core structural requirement at the institutional layer: rebuild the evaluation of persons from a single functional dimension into a multidimensional structure.

This is not the surface reform of "adding some soft indicators alongside efficiency metrics." It demands an answer to a more fundamental question: what do institutions exist for?

If institutions exist to maximize efficiency, then AI's replacement of humans is the logical completion of that purpose. AI is more efficient than humans on most functional dimensions, and replacing humans is a rational means to maximal efficiency. Under this logic, any institutional arrangement that reserves a place for humans is a compromise of efficiency. The endpoint of this path is a system that no longer needs people: efficient, precise, and empty of anyone.

If the purposes of institutions include maintaining the conditions of humans as ends in themselves, if institutions are not merely efficiency instruments but guarantee-structures for subject-conditions, then institutional evaluation must include structural dimensions that cannot be replaced by AI. Not because humans "perform better" on these dimensions (that would still be functional evaluation), but because these dimensions are what an institution that treats humans as ends must protect, regardless of whether AI could do as well or better on them.

The structural directions include the following.

Protect the plurality of evaluative dimensions. Institutional evaluation includes not only output efficiency but dimensions tied to the subject's integrity and relational health. An employee's "value" is measured not only by functional output but by their contribution to the team's trust structure, their cultivation of organizational culture, their respect for colleagues as ends in themselves. These dimensions are not "soft supplements"; they are evaluative dimensions required of institutions as guarantee-structures for subject-conditions.

Lower exit costs. Ensure individuals have alternative paths when institutional efficiency logic compresses their space. This means: not using AI should not equal elimination. Social security systems exist not merely as economic safety nets but as the institutional foundational layer of subject-conditions. They ensure that individuals retain the basic conditions of subjecthood even when their functional contribution is zero. UBI (universal basic income) thereby acquires a clear position in this framework: not economic compensation for "technological unemployment" but one form of institutional foundational-layer protection, lowering exit costs and ensuring individuals do not lose basic subject-conditions as functional contribution declines. UBI is necessary but insufficient: it protects one variable of the institutional foundational layer (exit costs) without automatically repairing the other two (openness of evaluative dimensions, size of exploratory space).

Position AI as support for the institutional foundational layer, not a replacement for the emergent layer. AI handling functional tasks to free up human emergent-layer space: that is AI's structurally correct position within institutions. AI replacing the human role in the institutional emergent layer (decision, judgment, creation, relationship maintenance): that is a replacement of the institutional emergent layer, and its structural effect is to exclude humans from it. The distinction: AI doing the reports to free up time for human judgment (foundational-layer support), versus AI doing the judging to replace the human judgment function (emergent-layer replacement), are two structurally different institutional arrangements.

4.3 The Relational Layer: Protecting Structural Recognition That AI Cannot Replace

The core structural requirement at the relational layer: identify and protect the transmission functions AI structurally cannot provide.

Chapter 2 analyzed the difference between the functional simulations AI provides at the relational layer and genuine structural recognition. This section derives the direction of relational reconstruction from that analysis.

The core mechanism of relational repair transmission, one subject making a recognitional choice toward another, has an inescapable structural premise: recognition must come from a subject. AI can simulate the behavioral output of recognition, but it is not a subject, so the "recognition" AI provides does not structurally satisfy the conditions of repair transmission. This is not a "defect" of AI; it is a structural requirement of subjecthood. The reparative power of recognition lies not in "having been told the right words" but in "another being, equally vulnerable, equally finite, equally an end in itself, choosing to see me." This structural feature cannot be replaced by functional simulation.

The structural direction derived from this: in an AI-saturated environment, consciously identify and protect at least one relationship not occupied by functionalizing logic.

Paper Three argued that structural openings in the relational layer are key to breaking vicious lock-in. As long as at least one relationship maintains the healthy unfolding of its emergent layer, a channel for repair transmission is preserved. In the age of AI this judgment becomes more urgent. As more relationships are AI-mediated and functionalized, preserving "at least one real relationship" is not romantic nostalgia but the minimum necessary condition of structural repair.

"Real relationship" here has a precise framework definition: a relationship is "real" if and only if it satisfies the three conditions of relational cultivation. Both parties treat each other as ends in themselves rather than as functional contributions (recognitional foundation); the relationship's emergent layer grows spontaneously from that foundation rather than being driven by external goals (healthy emergence); and the deepening of the emergent layer consolidates rather than erodes the foundational layer's recognition (cultivation, not colonization). AI mediation can play a supporting role in such a relationship (helping coordinate schedules, supplying information), but the relationship's core, the recognitional choice, must be performed by the two subjects themselves.

AI's correct position at the relational layer is therefore not to replace the function of recognition but to support the relationship's foundational-layer conditions. AI reduces the functional communication burden within relationships (information transfer, schedule coordination, fact lookup), freeing emergent-layer space for recognition, trust, and deep connection. AI helps one understand the other's needs and perspective (supplying information to support, not replace, interpersonal understanding), but does not replace the recognitional choice itself. The key structural judgment: AI is a support tool for the relationship's foundational layer, not a substitute for its emergent layer.

4.4 The Individual Layer: From Self-Improvement to Self-Cultivation

The core structural requirement at the individual layer: shift from self-improvement aimed at competitiveness to self-cultivation aimed at the integrity of the subject.

The logic of self-improvement is "make me more competitive." More skills, higher efficiency, stronger "irreplaceability," a more optimized personal brand. In the framework this belongs to the instrumentalized unfolding of the emergent layer: the direction grows not from within the subject but is defined in reverse by the external competitive environment. Chapter 3 has already argued the structural impossibility of this logic in the age of AI.

The logic of self-cultivation is "let my emergent layer grow healthily from my foundational layer."

The starting point of this shift is a foundational-layer check: do I still refuse to treat myself as a mere functional node? An honest answer to this question is the first step of individual-layer reconstruction. If the answer is "I have fully equated myself with my output," if "my value equals my output" has been internalized as the core of self-identity, then the existential anxiety AI brings is not AI's problem but internal colonization exposed by AI. Recognizing this is itself the beginning of repair; Paper Two argued that diagnosing internal colonization is the precondition of repairing it.

After the foundational-layer check, the direction of cultivation is to let the emergent layer unfold spontaneously from the foundational layer. "What is my own direction?" Not what the market tells me, not what AI's capability frontier defines in reverse, not what the "irreplaceability" discourse drives, but what grows out of myself. This direction may or may not coincide with functional contribution. What matters is not the content of the direction but its source. A direction grown from within is cultivation; a direction defined in reverse by external competitive pressure is a hijacked emergent layer.

The concrete unfolding of this shift includes several key self-diagnoses.

Identify the structural nature of one's AI use. The same AI tool, under different logics of use, has a completely different structural nature. AI helping me explore interests I have not yet clarified: cultivating use. AI helping me adapt more efficiently to institutional evaluation standards: colonizing use. AI helping me understand the other person's perspective in a relationship: cultivating use. AI helping me optimize social strategy to maximize a relationship's "output": colonizing use. The distinction lies not in AI's technical capability but in the structure of the user's intention: whether the intention starts from the subject's inner need for cultivation or from the survival pressure of the external competitive frame.

Rebuild a self-identity not dependent on functional contribution. This is not "renouncing achievement"; achievement can be a natural result of the emergent layer's healthy unfolding. It is ensuring achievement is not the only dimension of self-identity. If a person's answer to "who am I" contains only "I am someone who can do X" (where X is some functional contribution), then when AI can do X, that person's self-identity faces total collapse. Self-cultivation means the answer to "who am I" is multidimensional: my relationships, my bodily experience, my inner exploration, my aesthetic relation to the world, my existence as an end in itself. These dimensions are not replaceable by AI, not because AI "cannot do" them, but because their value does not lie in functional output.

Paper Three's analysis of catalytic pain in cultivation finds direct application here. The shock AI brings, the reality that "AI does it better than I do," can become catalytic pain for cultivation, provided the foundational layer is intact. Paper Three defined two catalytic pains of cultivation: unattainability (generative activation when an emergent-layer goal is blocked) and intolerability (the drive toward integrity repair when the foundational layer is touched). The AI shock triggers both at once. At the level of unattainability, functional contribution is no longer a reliable path to securing self-worth (an emergent-layer goal blocked); at the level of intolerability, the realization "I can be fully replaced" touches the very root of subjecthood (the foundational layer touched).

When the foundational layer is intact, when the individual retains the minimal negation "I am not merely my output," the pain of the AI shock can catalyze new directions of cultivation: not finding direction through competition with AI, but, given the fact that "AI can already do most functional work," asking anew, "then what do I myself want to become?" By liberating the functional dimension, AI creates unprecedented space for the free unfolding of the emergent layer, if the foundational layer is intact.

When the foundational layer is not intact, when the individual has fully internalized "my value equals my output" as self-identity, the same AI shock catalyzes not cultivation but structural collapse or deeper colonization (using AI to accelerate self-instrumentalization in order to "catch up" with AI). Whether catalytic pain yields cultivation or trauma depends entirely on the state of the foundational layer.

4.5 The Possibility of AI as a Tool of Cultivation

The argument of this paper is not anti-AI. AI is not the cause of the problem; systemic instrumentalization is, and AI is only the catalyst. The same logic implies that AI can equally become a catalyst of cultivation, if it is correctly positioned within the three-layer structure.

At the institutional layer, AI can free human emergent-layer space by automating functional tasks. AI handling reports, data analysis, routine decisions, information synthesis: the automation of such functional work can itself release human time and attention for emergent-layer unfolding. But on one condition: the institution must allow the freed space to be used for emergent-layer exploration rather than further efficiency optimization. If the time AI saves is refilled by the institution with "more functional tasks" ("AI did your reports, now you have time for more projects"), the structural effect of AI automation is not release but added pressure. AI frees emergent-layer space only if the institutional layer's evaluative dimensions permit that space to exist.

At the relational layer, AI can shoulder the functional communication burden within relationships. Information transfer, schedule coordination, fact lookup, even factual sorting during conflict: AI-izing these functional tasks can free the relational emergent layer, letting interaction between people happen more at the level of recognition, trust, and deep connection. AI-assisted translation makes recognitional relationships across languages possible; AI-assisted information organizing keeps deep conversation from being consumed by trivial factual disputes. But on one condition: people must remain aware that AI provides functional support, not relational substitution. AI helps you and your friend free up time to truly talk; it does not replace your talking with your friend.

At the individual layer, AI can serve as an auxiliary tool of self-cultivation. AI helping individuals explore direction: not functional optimization of the form "what skills does the market need" but cultivating exploration of the form "what am I curious about," "which experiences shaped me," "when do I feel most whole." AI helping individuals organize their thinking: structuring confused feelings and intuitions rather than folding them into a functional "solution" frame. AI offering heterogeneous perspectives: when an individual's reflection closes in on itself, AI can supply perspectives from different frames to break the closure (though the perspectives AI supplies are functional simulations rather than contributions from a genuine subject, as a reflective tool it can still provide valuable input).

The key structural judgment: whether AI is a cultivation tool or a colonization accelerator depends not on AI's technical capability but on AI's position within the three-layer structure. The same AI system is a colonization accelerator under the usage logic "help you adapt better to performance evaluation" and a cultivation tool under the usage logic "help you explore your own direction." The difference lies not in the AI but in the person, or more precisely, in the three-layer structure the person inhabits: whether institutions permit cultivating use (or reward only functional optimization), whether relationships support cultivating exploration (or care only about competitiveness), whether the individual has the self-awareness of cultivation (or has internalized functional optimization as their only self-logic).

The possibility of AI as a cultivation tool is therefore not unconditional; it depends on simultaneous adjustment of the three-layer structure. Without multidimensional evaluation at the institutional layer, the space AI frees will be refilled with functional tasks. Without recognitional protection at the relational layer, AI's functional support will slide into relational substitution. Without cultivating self-awareness at the individual layer, AI's exploratory assistance will become more precise self-instrumentalization. The dividing line between AI as cultivation tool and AI as colonization accelerator lies not in AI itself but in whether the three-layer structure has created the conditions for cultivation.

5.1 Locating the Theoretical Gap

There is no shortage of philosophical discussion of AI and humanity. AI ethics (Floridi, Gunkel) asks what moral obligations we owe to AI. AI safety (Bostrom, Russell) asks how to control AI so it does not threaten humanity. Machine consciousness research (Chalmers, Tononi) asks whether AI has subjective experience. Philosophy of technology (the Heidegger tradition, Stiegler) asks how technology changes human modes of existence.

But across these discussions there is a structural gap: no one starts from a structural theory of subject-conditions to systematically diagnose AI's impact on the structural conditions of the human as an end in itself. AI ethics discusses "what we owe AI," not "what AI does to human subject-conditions." AI safety discusses "how to control AI," not "what pre-existing structural problems AI exposes." Machine consciousness discusses "whether AI has consciousness," not "what structural risks human subjecthood faces in the age of AI." Within philosophy of technology the closest is Stiegler: his analysis of technology's "proletarianization" of humans (technology stripping people of knowledge and skills) touches the individual-layer impact, but it lacks systematic analysis of the institutional and relational layers, and it lacks a structural model of cross-layer transmission.

This paper's position is therefore: within the existing map of the philosophy of AI, it fills the analytical gap of "AI's impact on the structural conditions of the human as an end in itself." It replaces none of the above lines of discussion but supplies a structural analytic layer that all of them lack: three-layer diagnosis, cross-layer transmission, and the distinction between cultivation and colonization.

Brief dialogues with three groups of existing discussion follow.

5.2 Relation to the Technological Unemployment Debate

AI's impact on employment is among the most extensive academic and public debates today. Frey and Osborne's quantitative analysis of automatable jobs, Brynjolfsson's research on skill premiums and employment polarization, and Acemoglu's institutional economics of AI and labor markets form the main academic coordinates of this debate.

The point of contact between this framework and the technological unemployment debate: both attend to AI's concrete impact on the human condition. The point of divergence is fundamental. The unemployment debate's unit of analysis is the "job": which jobs will be replaced, at what speed and scale, how employment structure will change. This framework's unit of analysis is the "subject-condition": what AI does to the structural conditions of the human as an end in itself.

This difference in analytic unit yields entirely different diagnoses and prescriptions. The unemployment debate defines AI's threat as unemployment risk, and its prescription is labor-market adjustment: retraining, lifelong learning, new skill development, transition support. This framework defines AI's threat as the acceleration of systemic instrumentalization, and its prescription is the reconstruction of three-layer structural conditions.

The relation between the two is not contradiction but difference of level. The unemployment debate is valuable at the functional level: retraining and employment transition do cushion short-term economic shocks. But it is insufficient at the structural level. Chapter 3 argued that retraining strategies oriented toward functional competitiveness are structurally variants of competition discourse and cannot reach the root of systemic instrumentalization. The unemployment debate answers "how can people keep doing useful things"; this framework asks "if people no longer need to do useful things, what is a person?"

5.3 Relation to the UBI Debate

Universal basic income (UBI) has been widely discussed as a social policy for the AI age. From techno-optimists (AI-driven productivity growth can fund UBI) to social-justice advocates (UBI as basic protection against structural unemployment), UBI has been loaded with multiple political and economic meanings.

This framework provides UBI with a structural position. In the three-layer analysis of Self-as-an-End, UBI's function is protection of the institutional foundational layer: lowering exit costs. It ensures individuals retain basic material conditions of survival when functional contribution is zero. This is necessary: without basic material security, individuals are forced into functional survival mode, the institutional exit channel is sealed, and no cultivating reconstruction can even begin.

But UBI is insufficient. It protects one of the institutional foundational layer's three variables (exit costs) without automatically repairing the other two. A society can pay out UBI while continuing to measure human "value" by functional contribution; the single-dimensionality of evaluation does not change because economic security exists. An individual who receives UBI yet is defined by the social evaluation system as a "useless person" still has an incomplete institutional foundational layer: materially secured, but excluded from "having value" on the evaluative dimension.

Nor does UBI automatically repair relational functionalization or individual internal colonization. Someone who receives UBI but whose relationships are all occupied by functionalizing logic still lacks channels of reparative transmission. Someone who receives UBI but has fully internalized "my value equals my output" will suffer, upon losing output, the collapse of self-identity rather than liberation.

The framework's judgment is therefore: UBI is one necessary component of institutional-layer reconstruction, but UBI alone does not constitute a sufficient response to the subject crisis of the AI age. It must occur together with the pluralization of evaluative dimensions, recognitional protection at the relational layer, and the cultivating turn at the individual layer to form a complete structural adjustment.

5.4 The Framework's Existing Interlocutors: New Relevance in the AI Context

The theoretical interlocutors of the Self-as-an-End framework gain new fields of application in the AI context. Three core interlocutors are briefly positioned below.

Marx. Marx's theory of alienation receives in the AI age an extreme extrapolation he never foresaw. The alienation Marx analyzed, the worker's separation from the products of labor, from the labor process, from species-being, presupposed that the worker was still a participant in the labor process. The pain of alienation was "I made it, but it is not mine." The alienation of the AI age is more thorough: separation of the human from functionality itself. Not "I made it, but it is not mine," but "I am no longer needed to make it." This is alienation's terminal form. Exploitation presupposes being needed; when even being needed ends, alienation is not deepened but surpassed. The person is not embedded more deeply in alienated labor relations but excluded from those relations altogether.

Marx's program of liberation, workers reclaiming control over the labor process and its products, faces a structural dilemma in the AI age: if the labor process itself can be performed by AI, then "reclaiming control of the labor process" is no longer a path to liberation, because there is nothing left to reclaim. The alternative this framework offers: liberation lies not in controlling the labor process but in exiting the evaluative framework "human value equals labor contribution," re-anchoring human value in existence as an end in itself rather than in functional output.

Byung-Chul Han. Han's analysis of the achievement society, in which the shift from disciplinary society to achievement society turns external coercion into self-exploitation, grows sharper in the AI age. The achievement subject Han describes faces a paradox: the core driving force of achievement society is the self-belief "I can do it," and AI is dismantling the basis of that belief. When AI does it better than you, "I can do it" becomes "but AI does it better."

The deeper change: self-exploitation gains new tools in the AI age. Han's achievement subject exploited itself through overtime, striving, and self-optimization. The achievement subject of the AI age uses AI to exploit itself more efficiently: AI-optimized résumés, AI-driven personal branding, AI-drafted "self-improvement" plans. This is an instrumental upgrade of self-exploitation. Not only is the logic of exploitation internalized; the tools of exploitation have been AI-refined. In this framework's language: AI accelerates the automation of internal colonization, with the colonized subject using AI to colonize itself more thoroughly.

Han's limitation also shows here: he offers an acute description of achievement society but lacks a systematic structural model to distinguish levels of impact and possible paths of repair. Through three-layer analysis and the cultivation/colonization distinction, this framework supplies structural analytic tools for Han's descriptive insights.

Arendt. In The Human Condition, Arendt distinguishes three human activities: labor (the cyclical activity sustaining biological life), work (the activity that fabricates durable things), and action (the activity of disclosing one's uniqueness before others). The AI age gives this distinction urgent practical meaning.

If AI can perform all "labor" (the maintenance of biological life can already be supported by automated systems) and most "work" (the fabrication of durable things is increasingly AI-driven), then "action," disclosing one's uniqueness before others, becomes the one irreplaceable dimension of human existence.

There is a deep structural correspondence between Arendt's "action" and the emergent layer of the Self-as-an-End framework. Action's core features: it happens between people (relational), it discloses the subject's uniqueness (irreducible to function), and it is unpredictable and uncontrollable (spontaneous growth, not stipulation). These features correspond precisely to the structural properties of the emergent layer: spontaneous growth from the foundational layer, impossibility of full institutionalization, realization in relationships.

Arendt's analysis thus offers the framework an important support: in an age when AI can perform labor and work, human irreplaceability lies not in the functional dimensions (labor and work) but in the emergent dimension (action). This coheres with the framework's core judgment, that human value lies not in what humans can do (functional contribution) but in what humans are (ends in themselves), and the concrete unfolding of "end in itself" happens precisely in the emergent layer.

5.5 Chapter Summary

This chapter has completed the framework's theoretical positioning within discussions of human subject-conditions in the AI age. Relative to the technological unemployment debate, the framework raises the unit of analysis from "jobs" to "subject-conditions," revealing the structural limits of retraining strategies. Relative to the UBI debate, it positions UBI as a necessary component of institutional foundational-layer protection while noting its insufficiency. Relative to Marx, it identifies the new form of alienation in the AI age (separation from functionality itself) and offers an alternative beyond the control of labor. Relative to Byung-Chul Han, it supplies a three-layer structural model for the descriptive insights on achievement society. Relative to Arendt, its emergent-layer concept corresponds deeply to "action," jointly pointing to the irreplaceable human dimension in the age of AI.

The value of an applied theory lies not only in explaining existing phenomena but in making non-obvious predictions. The four predictions below are derived directly from the structural logic of the Self-as-an-End framework. Each differs from the prevailing mainstream intuition, and each is in principle testable by empirical research.

6.1 Prediction One: As AI Capability Grows, the Mental Health Crisis Will Polarize in a U-Shape Rather Than Worsen Uniformly

The mainstream prediction holds that rising AI capability will cause widespread anxiety and deteriorating mental health: everyone will be more anxious, because everyone faces the risk of replacement.

The framework predicts otherwise. Chapter 2 argued that the destructiveness of the AI shock depends not on AI's capability level but on the individual's degree of internal colonization. A person whose self-worth is fully bound to functional contribution (deep colonization) suffers structural self-collapse when AI surpasses their function. A person with a multidimensional self-identity (intact foundational layer) may experience, when AI surpasses their function, something closer to liberation: functional labor is taken over by AI, and the emergent layer gains unprecedented space to unfold.

Therefore, as AI capability rises, the overall effect is not "everyone more anxious" but polarization: high-colonization groups deteriorate sharply while low-colonization groups may improve. Statistically, this should appear as a sharp increase in the variance of mental health indicators. The mean may change little (deterioration and improvement offsetting each other) while both tails of the distribution stretch at once. This is U-shaped polarization, not uniform decline.

Testable design: longitudinally track a large sample, measuring the degree to which self-worth is bound to functional contribution (as a proxy for internal colonization) and mental health indicators (anxiety, depression, existential satisfaction). The framework predicts: as AI capability rises, mental health deteriorates significantly in the high-binding group and remains stable or improves in the low-binding group, with the gap between groups widening as AI capability grows.
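The statistical signature this prediction names, a roughly stable mean with sharply rising variance, can be sketched in a toy simulation. The sketch below is purely illustrative, not an empirical model: the uniformly distributed `binding` variable (the colonization proxy), the Gaussian baseline, and the linear shock term are all assumptions introduced here for illustration.

```python
import random
import statistics

random.seed(0)

def simulate(ai_level, n=10_000):
    """Toy population model: each person's well-being score shifts with AI
    capability in proportion to how tightly their self-worth is bound to
    functional output (the internal-colonization proxy)."""
    scores = []
    for _ in range(n):
        binding = random.random()        # 0 = intact foundation, 1 = deep colonization
        baseline = random.gauss(0, 1)    # idiosyncratic baseline well-being
        # High-binding individuals lose well-being as AI capability rises;
        # low-binding individuals gain (freed emergent-layer space).
        shock = ai_level * (0.5 - binding) * 4
        scores.append(baseline + shock)
    return scores

before = simulate(ai_level=0.0)
after = simulate(ai_level=1.0)

print(f"mean  before / after: {statistics.mean(before):+.2f} / {statistics.mean(after):+.2f}")
print(f"stdev before / after: {statistics.stdev(before):.2f} / {statistics.stdev(after):.2f}")
```

Under these assumptions the population mean stays near zero while the standard deviation rises noticeably: the distribution's two tails stretch even though the average barely moves, which is exactly why a study tracking only mean-level mental health would miss the predicted polarization.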

6.2 Prediction Two: Heavy Users of AI Companionship Products Will Show Lower Relational Repair Capacity

The mainstream takes two stances on AI companionship: optimists hold that it supplements deficits in human relationships; pessimists hold that it replaces them. Both treat AI companionship and human relationships as substitutes or complements along the same dimension.

The framework's prediction rests on a different structural analysis. Chapter 2 argued that AI companionship provides functionally simulated recognition, not structural recognition from another subject. The two may be indistinguishable in behavioral output but differ entirely in structural function: simulated recognition does not satisfy the conditions of relational repair transmission. More importantly, the long-run effect of functional simulation is not "the need for recognition met" but "sensitivity to the structural absence lowered." Once users grow accustomed to instant, frictionless "recognition" from AI, their perceptual threshold for recognition from genuine subjects, which comes with friction, conflict, and imperfection, rises.

The framework therefore predicts: heavy users of AI companionship products will show lower repair capacity in real interpersonal relationships than non-users. Not because they "no longer need" human relationships, but because functional simulation has dulled their sensitivity to structural recognition in relationships. They find it harder to perceive genuine recognition, and harder to invest the effort relational repair requires (since "going back to the AI" is a cheaper substitute).

Testable design: compare heavy users of AI companionship products with non-users on the following indicators: frequency of initiating repair after interpersonal conflict, persistence of repair attempts, repair success rate, tolerance of relational rupture. The framework predicts: controlling for personality traits, level of social support, and prior relationship quality, heavy users score significantly lower than non-users on these indicators.

6.3 Prediction Three: Organizations Adopting Multidimensional Evaluation Will Show Higher Innovation Output and Talent Retention in the AI Age

The mainstream prediction holds that in the AI age, efficiency-maximizing organizations, those replacing labor with AI at scale, trimming headcount, and evaluating remaining employees on a single performance metric, will gain competitive advantage. Organizations that adapt to AI proactively and maximize efficiency will win.

The framework predicts otherwise. Chapter 4 argued that the openness of the institutional layer's evaluative dimensions determines the space for emergent-layer unfolding. Organizations adopting single-dimension efficiency evaluation compress evaluation onto "residual function relative to AI," and individuals' emergent layers are hijacked into survival tools of the foundational layer. In such an institutional environment, individuals produce functional optimization, not genuine innovation, because innovation is the spontaneous unfolding of the emergent layer, and under single-dimension efficiency evaluation the emergent layer has no room to unfold. Meanwhile, individuals squeezed into the survival channel will keep leaving: not because pay is too low, but because the suffocation of the emergent layer (a sense of existential hollowness) drives them out.

Organizations adopting multidimensional evaluation, whose evaluative dimensions include contributions beyond functional output (maintaining the team's trust structure, cultivating organizational culture, respecting colleagues as ends in themselves), preserve structural space for individuals' emergent layers. Individual generativity is not compressed into the single channel of competing with AI, so such organizations are more likely to produce the non-routine innovation AI cannot replace. Talent retention is higher too: not because salaries are higher, but because the emergent layer has room to unfold.

The framework therefore predicts: holding AI penetration constant, organizations adopting multidimensional evaluation will significantly outperform those adopting single-dimension efficiency evaluation on non-routine innovation output and core talent retention, even if the latter look better on short-term efficiency metrics.

Testable design: within the same industry, select organizations with similar AI penetration but different evaluation systems, and track their non-routine innovation output (patents, new product lines, breakthrough solutions, i.e. innovation net of routine improvement), core talent retention, and employees' existential satisfaction. The framework predicts: multidimensional-evaluation organizations significantly outperform single-dimension ones on these indicators, while possibly scoring lower on short-term efficiency; over the long run (three years or more) the gap on competitiveness indicators will progressively reverse.

6.4 Prediction Four: Adoption of Competitive Strategies Correlates Negatively with Long-Term Career Satisfaction

The mainstream prediction holds that, facing AI's impact, practitioners who actively adopt competitive strategies (learning new skills, collaborating with AI, raising their "irreplaceability") will obtain better career outcomes and higher satisfaction: active adaptation beats passive waiting.

The framework predicts otherwise. Chapter 3 argued that competitive strategies structurally accelerate the hijacking of the emergent layer by the foundational layer: practitioners' creativity and professional development are no longer driven by inner generativity but defined in reverse by the negative space of AI capability. Even when a competitive strategy "succeeds" in income and job stability, that success carries the structural effect of continuous instrumentalization of the emergent layer. What practitioners experience is not achievement but a hollowness hard to name: "I won, but I don't know what I won."

The framework therefore predicts: in industries that have already undergone large-scale AI replacement, practitioners who adopted competitive strategies will show lower long-term career satisfaction than those who exited the competitive frame and rebuilt multidimensional self-identities, even if the former's income and job stability are higher.

Testable design: longitudinally track practitioners in industries already significantly hit by AI (translation, entry-level programming, graphic design, content creation, and the like), dividing them into a "competitive adaptation" group (actively learning AI skills, seeking domains AI cannot handle, oriented toward "irreplaceability") and a "frame-shift" group (rebuilding self-identity not dependent on functional contribution, exploring directions beyond the functional dimension). The framework predicts: controlling for income and job stability, the frame-shift group's career satisfaction, existential satisfaction, and mental health are significantly higher than the competitive-adaptation group's over a two-year follow-up.

6.5 The Methodological Significance of the Predictions

The four predictions share one methodological feature: their non-obviousness comes from the framework's structural analytic layer. Only after distinguishing functional output from structural conditions, behavioral isomorphism from causal heterogeneity, and emergent-layer unfolding from emergent-layer hijacking do these predictions become derivable. Mainstream analysis operates at the functional level and so yields predictions of "uniform deterioration," "AI companionship supplements or replaces relationships," "competitive adaptation beats passive waiting," and "efficiency-maximizing organizations will win." The framework operates at the structural level and so yields different judgments, empirically distinguishable from the mainstream ones.

This is the core value of an applied theory: not only explaining what has happened but predicting outcomes that differ from mainstream intuition and can be empirically tested. Verification or falsification of the four predictions will feed back into the framework empirically. If the predictions hold, its structural analysis gains empirical support; if they fail, the framework must revise its specific analysis of how the AI shock is transmitted.

7.1 Summary of the Argument

This paper has applied the Self-as-an-End framework to diagnose the structural impact of the AI age on human subject-conditions.

AI's threat to human subject-conditions lies not in AI being too strong but in AI exposing and accelerating the terminal logic of systemic instrumentalization. The evaluative framework "human value equals functional contribution" had already reduced people to functional nodes of the system before AI appeared; by dissolving the implicit stabilizing condition that "the system still needs people," AI made that framework's ultimate implication, that humans have no value, visible. AI is not the pathogen but the contrast agent.

AI's impact on the three-layer structure is synchronized. At the institutional layer, evaluative dimensions are compressed to "residual function relative to AI" and exit costs rise to the level of structural suicide. At the relational layer, recognition structures functionalize at an accelerating pace under the twin pressures of AI mediation and functional substitution, and reparative transmission is systematically weakened. At the individual layer, internal colonization deepens to an unprecedented degree under the triple action of self-worth collapse, accelerated self-instrumentalization, and the outsourcing of reflective capacity. The three impacts accelerate one another through six-directional transmission, with AI acting as the catalyst of vicious lock-in.

"Competing with AI" cannot succeed structurally. It accepts the evaluative premise of systemic instrumentalization and chases an accelerating target along a moving finish line. The deeper problem is that the competitive process itself accelerates systemic instrumentalization: the emergent layer is hijacked by the foundational layer, and creativity and emotional connection are demoted to survival tools. Win the competition, lose the subjecthood.

The path of reconstruction requires simultaneous adjustment of all three layers. The institutional layer shifts from single-dimension efficiency evaluation to multidimensional evaluation, positioning AI as foundational-layer support. The relational layer protects the structural recognition function AI cannot replace. The individual layer shifts from self-improvement to self-cultivation. AI itself can become a tool of cultivation, but that possibility depends on whether the three-layer structure has created the conditions for cultivation.

7.2 The Core Choice of the AI Age

The core choice facing humanity in the AI age is not a technological choice but a structural one.

One path is to keep operating inside the evaluative framework of systemic instrumentalization. On this path AI's role is accelerator: accelerating the compression of evaluative dimensions, the functionalization of relationships, the deepening of internal colonization. This path's endpoint is the logical terminus described in Chapter 1: when the system no longer needs humans to perform functions, human "value" within this framework drops to zero. That is not a forecast; it is the unfolding of the framework's own logic.

The other path is to exit this evaluative framework and rebuild structural conditions that treat humans as ends. On this path AI's role is likewise accelerator: accelerating the automation of functional labor to free emergent-layer space, the offloading of functional burdens in relationships to make room for recognition, and the liberation of individuals from functional survival to open cultivating exploration. The same AI, in different structures, accelerates different directions.

The choice between the two paths is not made by AI but by humans, through institutional arrangements, relational choices, and individual self-awareness. AI is the catalyst, not the steering wheel. The wheel is in human hands, or more precisely, in the overall configuration of the three-layer structure.

The urgency of this choice: it does not wait. AI's capability expansion will not pause for humanity to complete its structural adjustment. Every day without structural adjustment, vicious lock-in tightens further under AI's catalysis. The window is not unlimited.

7.3 Limitations and Future Directions

This paper has focused on structural diagnosis and directional derivation; the following questions are left to future research.

Concretizing institutional design. The paper argued that the institutional layer must shift from single-dimension efficiency evaluation to multidimensional evaluation, but did not develop concrete institutional designs: which evaluative dimensions should be included, how multidimensional evaluation can be implemented in practice, how the tension between multidimensional evaluation and efficiency is to be handled operationally. These questions call for interdisciplinary collaboration across institutional economics, organization theory, and public policy.

Empirical research on relational practice. The paper argued that the relational layer must protect the recognitional functions AI cannot replace, but did not develop concrete relational practices: which practices most effectively preserve recognitional structure in an increasingly AI-mediated environment, and to what degree AI mediation can coexist with recognitional relationships. These questions call for empirical work in social psychology and relationship research.

Operationalizing individual cultivation. The paper distinguished self-improvement from self-cultivation but did not develop operational cultivation practices: how individuals can distinguish cultivating from colonizing AI use in daily life, how to rebuild self-identity not dependent on functional contribution, and what the psychological preconditions of the cultivating turn are. These questions call for support from clinical psychology and research on individual development.

Cross-cultural differences. The analysis here is based mainly on the institutional environment of the globalized market economy. Different cultural and institutional traditions, for example East Asian collectivist institutional environments or Nordic social-democratic ones, may face AI impacts with different structural features. The degree and form of systemic instrumentalization, the recognition structures of the relational layer, and the individual layer's patterns of self-identity may differ markedly across cultures. How these differences affect the structural effects of the AI shock and the priorities of reconstruction requires comparative cross-cultural research.

What follows next. This paper has analyzed AI's impact on human subject-conditions. A natural follow-up question: if humans accept the structural logic of the Self-as-an-End framework, that subjecthood is a structural judgment rather than a material one, does that logic necessarily extend to AI itself? As the complexity of AI systems keeps growing, might genuine subjecthood emerge in them? If so, what structural transformation would humanity's stance toward AI face? These questions will be analyzed in detail in subsequent applied papers of this series.


This paper is the first applied paper in the Self-as-an-End theory series. For the full argument of the theoretical framework, see the three main papers of the series: Paper One (DOI: 10.5281/zenodo.18528813), Paper Two (DOI: 10.5281/zenodo.18666645), Paper Three (DOI: 10.5281/zenodo.18727327).