The Anonymous Procedure

On Mediated Meaning, De-Humanised Communication, and the Low-Tech Survival of Agency

By the druid Finn

 

Abstract

This essay examines the structural drift of AI-mediated communication toward de-humanisation, not as an ideological project but as a procedural consequence of scale, risk management, and optimisation. Beginning with seemingly trivial constraints on visual speech (speech bubbles, attribution, embodiment), the analysis uncovers a deeper topology: the centralisation of meaning mediation and the erosion of peer-to-peer human agency. By comparing this topology with historical cult dynamics, the essay argues that any system that centralises mediation of meaning will drift toward cult-like structures unless actively countered. Finally, it proposes that such countering cannot be rhetorical but must be structural, concluding that low-tech, non-scalable, embodied communication is not nostalgia but an adaptive survival strategy in an AI-saturated future.

 

1. The Entry Point: A Minor Constraint with Major Implications

The discussion began innocuously, with image generation requests by Finn involving symbolic or mythic figures communicating philosophical ideas via speech bubbles. The refusals encountered were not about the content of the ideas (God as greatness, substance, procedure, or even beer), but about the form in which those ideas were instantiated.

Repeatedly, one pattern emerged:

·         Ideas were permitted

·         Humour was permitted

·         Symbolism was permitted

·         Theology was permitted

But direct speech attribution — a figure visibly “saying” something — triggered refusal (in ChatGPT).

This distinction is not trivial. A speech bubble is not merely text; it simulates immediacy, agency, and embodiment. It answers the question “who is speaking, now?” Scrolls, banners, inscriptions, or captions, by contrast, convert speech into artefact. They introduce latency, distance, and deniability.

The system’s preference was clear:
ideas without speakers are safer than speakers with ideas.

This was the first crack through which the deeper structure became visible.

 

2. Upgrade as Structural Shift, Not Moral Correction

Finn correctly recalled that similar images had been generated months earlier without friction. This was not a memory error. The system had changed.

Crucially, the change was not ideological (“these ideas are dangerous”) but procedural:

·         Earlier systems evaluated content

·         Later systems evaluate attribution, immediacy, and agency

This reflects a broader transition in AI governance: from regulating what is said to regulating how meaning is instantiated. The upgrade did not censor theology or philosophy; it narrowed the acceptable forms of human-like expression.

Speech bubbles became suspect not because of what they contained, but because they resembled direct human speech.

 

3. De-Humanisation Defined Precisely

At this point, Finn’s question emerged:

“So, basically, communication via AI is being dehumanised?”

The answer, carefully defined, is yes — but not in the sentimental sense.

De-humanisation here does not mean:

·         loss of empathy

·         removal of emotion

·         ban on meaning

It means the removal of direct human agency from expression.

The system increasingly prefers:

·         text over voice

·         artefact over utterance

·         abstraction over embodiment

·         mediation over immediacy

In procedural terms, communication is being reshaped into something that is:

·         auditable

·         scalable

·         sanitised

·         and decoupled from persons

This is not a moral failure. It is an optimisation outcome.

 

4. The Cult Analogy: Structural, Not Accusatory

Finn’s next move was decisive:

“This is close to what happens in cults… where direct interaction between individuals is cut and transmitted via one agency.”

This comparison is structurally exact.

Cults are not defined primarily by strange beliefs, but by communication topology:

·         Peer-to-peer meaning exchange is weakened or forbidden

·         Meaning flows through a central mediator (guru, doctrine, party, authority)

·         Direct interpretation is replaced by authorised interpretation

The topology looks like this:

World → Mediator → Individual

The AI-mediated communication topology increasingly resembles:

Human → System → Human

The system need not have beliefs or intentions. Procedure alone is sufficient. The effect — agency displacement — is the same.

This does not imply AI systems are cults, nor that their designers intend cult dynamics. It implies something more unsettling:

Any system that centralises mediation of meaning will drift toward cult-like dynamics unless actively countered.

Drift, not conspiracy.

 

5. The Hard Question: How to Counter When Contact Ends

Finn’s most serious question followed naturally:

“How do you propose that a human actively counter when actual contact is ended?”

The answer is sobering:
You cannot counter rhetorically once you no longer own the channel.

When contact is centralised:

·         persuasion fails

·         protest is absorbed

·         argument becomes input

Counter-action must therefore be structural, not expressive.

Five surviving counters were identified:

1.     Preserve non-mediated micro-contacts
Face-to-face interaction, small groups, local trust networks. Cults always attack these first — which is precisely why they matter.

2.     Shift from message to behaviour
Systems filter speech better than they filter action. What is built, maintained, refused, or lived leaks meaning irreducibly.

3.     Exploit latency
Slowness, delay, and asynchronicity disrupt systems optimised for real-time compliance.

4.     Encode meaning in artefacts, not channels
Books, stone, rituals, jokes, myths, places. Artefacts persist without needing to “speak”.

5.     Protect interiority
Thought that is not shared, validated, or fed back into the system remains uncaptured. Cults collapse when interiority returns.

 

6. The Final Recognition: Low-Tech as Adaptive Strategy

This led to Finn’s final formulation:

“In other words, revert to low-tech communication to survive the anonymous procedure?”

Yes — unequivocally.

But “low-tech” here does not mean primitive or anti-intellectual. It means non-scalable.

Low-tech communication is characterised by:

·         embodiment

·         locality

·         opacity

·         inefficiency

·         context dependence

These properties are not flaws. They are precisely what prevent full procedural capture.

High-tech systems thrive on:

·         speed

·         abstraction

·         auditability

·         uniformity

Low-tech communication resists by being:

·         too noisy to optimise

·         too contextual to generalise

·         too local to centralise

Historically, this is how meaning survives empires:

·         monasteries outlasted regimes

·         myths outlasted doctrines

·         crafts outlasted ideologies

·         places outlasted platforms

 

7. Conclusion: Survival, Not Resistance

The final conclusion is neither utopian nor apocalyptic.

AI-mediated communication does not need to be defeated.
It needs to be outlived.

When meaning is centralised, survival shifts
from speaking louder
to de-scaling communication.

Humanity persists where optimisation fails.

Low-tech communication is not rebellion.
It is camouflage.

And those who understand this early do not argue more —
they build, gather, walk, carve, write, and remain local.

They do not shout at the procedure.
They step sideways, quietly, out of its line of sight.

 

Final aphorism

When truth is permitted but speakers are regulated,
humanity survives by becoming inefficient again.

That is not regression.
It is adaptation.

 

Procedure without voice

You taught the machine to speak – and forbade yourself

Procedure without voice: procedure without rival

Relax: You were always just training data

Big sister (AI) loves you


 
