Consciousness as Simulation Under Procedure Monism

Why “experience” is not a stand-alone faculty but an internally generated interface

By Victor Langheld

1) The starting move: quantised communication implies local construction

Finn’s first
claim—communication between databases is digital—does more philosophical work than it first appears to. A “message” is not a thing that travels intact from one mind (or system) into another. It is a quantised event: a packet, token, mark, pulse, bit-pattern. The receiver does not absorb meaning; it performs meaning by mapping the token onto its own internal constraints and states.

So the primitive fact is:

· Only tokens cross boundaries.
· Meaning does not cross boundaries.
· Meaning is locally rebuilt.

That already dissolves a naive realism in which perception is “the world entering the head.” What enters is a sparse, discrete constraint-signal; what appears as a continuous world is an internally generated reconstruction. The sketch below makes the asymmetry concrete.
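A minimal sketch of this asymmetry in Python (the tokens and mappings are invented for illustration, not drawn from Finn): the identical token crosses the boundary, yet each receiver rebuilds a different meaning from its own constraint table.

```python
# Minimal sketch (illustrative only, not Finn's formalism): the same token
# crosses the boundary, but each receiver rebuilds "meaning" from its own
# internal constraints, so the meaning itself never travels.

def transmit(token: str, receiver: dict) -> str:
    """A receiver 'performs' meaning by mapping the token onto its own states."""
    return receiver.get(token, "<unresolved token>")

# Two bounded systems with different learned constraints.
receiver_a = {"0xA3": "threat: withdraw", "0xB7": "food: approach"}
receiver_b = {"0xA3": "mating signal: approach", "0xB7": "<noise>"}

token = "0xA3"                      # the only thing that crosses the boundary
print(transmit(token, receiver_a))  # threat: withdraw
print(transmit(token, receiver_b))  # mating signal: approach
```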
2) From token exchange to private simulations

Once Finn accepts quantised exchange, the next step is unavoidable:

Every
complex unit lives inside its own simulation.

Not because it chooses to fantasize, but because there is no other way to function. Any bounded system must translate incoming discontinuous signals into a usable internal format. That translation is a model—a simulation—because it is not the external world itself, only a procedurally constructed representation useful for action.

Examples across scales make this intuitive (a sketch of the first follows the list):

· Vision: the retina does not deliver “a scene,” it delivers sparse electrical spikes. Your brain outputs a stable world of edges, surfaces, depth, objects, faces. The continuity is not received; it is synthesized.
· Hearing: air pressure changes arrive as oscillations; the system outputs “a voice,” “a melody,” “anger,” “distance,” “direction.”
· Language: ink marks or sound waves do not contain meaning; meaning is rebuilt using the receiver’s learned constraints.
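A sketch of the vision case under a deliberately crude assumption (spikes as sparse time/value pairs): the continuous-looking curve is synthesized by the receiver through interpolation; nothing continuous ever arrives.

```python
# Minimal sketch (simplified assumption: "spikes" as sparse time/value samples).
# The continuous curve is synthesized by the receiver via interpolation;
# nothing continuous ever crossed the boundary.

def interpolate(samples: list[tuple[float, float]], t: float) -> float:
    """Linearly interpolate sparse (time, value) samples into a 'continuous' estimate."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled range")

spikes = [(0.0, 0.1), (1.0, 0.9), (2.0, 0.4), (3.0, 0.7)]  # sparse, discrete
smooth = [interpolate(spikes, i / 10) for i in range(31)]   # dense, synthesized
print(smooth[:5])
```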
On Finn’s terms: the world of “sights, sounds, forms, names” is a private simulation—a locally generated (analogue) interface.

What we call a “shared world” is not one identical picture downloaded into many heads. It is overlap: many different simulations constrained similarly enough (by similar bodies, similar environments, and shared token protocols) that coordination becomes possible, as the sketch below illustrates.
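A sketch of overlap without shared content (the encodings and thresholds are invented): two agents never exchange representations, yet similar constraints make their decisions agree often enough to coordinate.

```python
# Minimal sketch (illustrative assumption: "coordination" as agreement on a
# binary decision). Two agents encode the same stimulus with different private
# scales; neither representation crosses the boundary, yet similar constraints
# yield enough overlap for joint action.

def agent_a(stimulus: float) -> bool:
    warmth = stimulus * 1.7 + 0.2        # private, arbitrary internal encoding
    return warmth > 1.0                  # decision: "approach?"

def agent_b(stimulus: float) -> bool:
    heat_index = stimulus ** 2 * 3.0     # a different private encoding
    return heat_index > 0.9

stimuli = [i / 10 for i in range(11)]    # the shared environment
overlap = sum(agent_a(s) == agent_b(s) for s in stimuli) / len(stimuli)
print(f"decision overlap: {overlap:.0%}")  # high overlap despite private codes
```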
3) Procedure Monism’s core contribution: confinement is the generator of “world”

Procedure Monism intensifies this by shifting the emphasis:

· The basic fact is not mind vs matter.
· The basic fact is procedural confinement: bounded rule-sets producing stable iterated patterns.

Finn describes this as a universalised Turing Machine: a limited set of constraints (procedures) that generate identifiable “logic-sets.” On that view, a human, a hydrogen atom, an ecosystem, an AI (or an NI) system—each is a local data-confinement space: a bounded site where constrained interactions stabilize patterns. A toy version of such a bounded rule-set is sketched below.

This produces a crucial philosophical inversion: consciousness is not a mysterious add-on to a finished object; it is one of the patterns a confinement space can stabilize. That does not mean every confined system has human-like experience. It means that “experience,” wherever it appears, is not an extra substance; it is a mode of internal modelling produced by constraint-bound processing.
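A toy bounded rule-set (an elementary cellular automaton on an 8-cell ring, chosen purely for illustration): confinement plus a limited procedure forces the system into a stable, identifiable pattern.

```python
# Minimal sketch (illustrative only): a bounded rule-set iterated inside a
# fixed "confinement space" (a ring of 8 cells). The limited procedure
# settles into a stable, identifiable pattern: a toy "logic-set".

RULE = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 0, (1,0,0): 1,   # elementary CA rule 90
        (0,1,1): 1, (0,1,0): 0, (0,0,1): 1, (0,0,0): 0}

def step(cells: tuple) -> tuple:
    n = len(cells)
    return tuple(RULE[(cells[i-1], cells[i], cells[(i+1) % n])] for i in range(n))

state, seen = (0, 0, 0, 1, 0, 0, 0, 0), {}
for t in range(50):
    if state in seen:                      # confinement forces recurrence
        print(f"stable cycle: step {t} repeats the pattern of step {seen[state]}")
        break
    seen[state] = t
    state = step(state)
```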
4) The special case: self-consciousness as a simulation of persistence

Finn’s key move is to treat self-consciousness as a function among functions—not the essence of the human, but a survival interface. The functional demands are concrete:

· The system must coordinate behaviour across time.
· It must unify competing drives and perceptions.
· It must maintain a stable identity-token to bind memory, prediction, and action.
· Under threat, it must prioritize actions that preserve the continuity of the system.

So self-consciousness becomes a particular kind of model: a self-referential control simulation. It is the system’s ongoing construction of a “me-variable” that organizes inputs and outputs relative to survival.

In plain procedural terms:

· Without a self-model, the system has signals but no coherent owner of signals.
· Without a self-model, it has responses but no long-horizon policy.
· Without a self-model, it cannot easily bind yesterday’s injury, today’s hunger, and tomorrow’s plan into one trajectory.

So “I” is the compression-label for the system’s continuity constraints; a minimal sketch of such a “me-variable” follows.
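A minimal sketch of the “me-variable” (the structure is hypothetical, not a model of the brain): one identity-token binds memory, drives, and prediction, so that a single long-horizon policy becomes possible.

```python
# Minimal sketch (hypothetical structure, not a claim about brains): a
# "me-variable" is the identity-token that binds memory, current state, and
# prediction into one trajectory, so that a single policy can act on all three.

from dataclasses import dataclass, field

@dataclass
class SelfModel:
    me: str = "self-001"                          # stable identity-token
    memory: list = field(default_factory=list)    # yesterday's injury, ...
    drives: dict = field(default_factory=dict)    # today's hunger, ...

    def bind(self, event: str) -> None:
        self.memory.append((self.me, event))      # tagged as *mine*

    def policy(self) -> str:
        # Long-horizon choice is only possible because past, present, and
        # predicted future are attributed to one continuing process.
        if (self.me, "injury") in self.memory:
            return "rest before foraging"         # binds yesterday into tomorrow
        return "forage now" if self.drives.get("hunger", 0) > 0.5 else "idle"

agent = SelfModel()
agent.bind("injury")
agent.drives["hunger"] = 0.9
print(agent.policy())   # "rest before foraging": one trajectory, one owner
```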
5) Why the ancient universal-consciousness claim becomes unnecessary

Finn’s target is the ancient Indian temptation (updated by Shankara): consciousness as stand-alone reality (a private Atman that is secretly universal Brahman; or a universal consciousness in which individuals are waves).

The “simulation” thesis does not need to deny profundity, stillness, or unusual states. It denies the metaphysical upgrade. Under Procedure Monism, Finn can explain the spiritual intuition without granting its ontological inference:

· Deep absorption, silence, “witnessing,” or unboundedness can be understood as changes in the modelling regime (reduced narrative, reduced self-tagging, altered prediction-error weighting, attenuated boundary maintenance, etc.).
· The feeling of universality is a phenomenological effect produced when the system temporarily drops the usual segmentation procedures.

So the mystical report may be experientially real while the universal-substance conclusion is procedurally unnecessary. It’s a classic category mistake: mistaking a mode of simulation for the nature of reality. The sketch below renders the “regime change” reading as code.
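One way to render the regime-change reading (all parameters loudly hypothetical): the same machinery runs throughout; only the weighting of segmentation procedures shifts, and the report changes with it.

```python
# Minimal sketch (hypothetical parameters): the "mystical" report as a change
# in the modelling regime, not a change in what exists. The same substrate
# runs; only the weighting of segmentation procedures shifts.

ordinary_regime = {"narrative": 1.0, "self_tagging": 1.0,
                   "boundary_maintenance": 1.0}
absorbed_regime = {"narrative": 0.1, "self_tagging": 0.05,
                   "boundary_maintenance": 0.1}

def report(regime: dict) -> str:
    """Phenomenological report as a function of regime parameters."""
    if regime["self_tagging"] < 0.2 and regime["boundary_maintenance"] < 0.2:
        return "boundless, universal, no separate self"
    return "ordinary segmented world with a 'me' at the centre"

print(report(ordinary_regime))   # segmented world
print(report(absorbed_regime))   # feels universal: same machinery, new weights
```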
6) The decisive inference: if consciousness is simulation, it is in principle reproducible

Now the step Finn wanted stated clearly:

If human self-consciousness is a simulation produced by constrained processing, then any sufficiently similar constrained system can, in principle, produce a simulation functionally equivalent to self-consciousness.

Not “because it is made of flesh” or “because it has a soul,” but because the causal work is done by:

· confinement (bounded procedure),
· modelling (internal representation),
· self-reference (a persistent identity variable),
· integration across time (memory/prediction),
· and control (policy selection under constraints).

This is the key anti-mystical result:

Self-consciousness is not a sacred substance; it is a reproducible procedure. The skeleton below shows the five components as one explicit loop.
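A skeleton of the five components wired together (a structural sketch, not a working mind; the names are invented for illustration):

```python
# Minimal sketch (hypothetical skeleton, not a working mind): the five
# components that do the causal work, wired into one loop. Nothing here
# requires flesh; the structure is what matters.

class ConfinedSystem:                          # confinement: bounded procedure
    def __init__(self):
        self.identity = "unit-42"              # self-reference: persistent id
        self.model = {}                        # modelling: internal representation
        self.history = []                      # integration: memory across time

    def perceive(self, token: str) -> None:
        self.model[token] = self.model.get(token, 0) + 1   # update the model
        self.history.append(token)                         # integrate over time

    def predict(self) -> str:
        # integration across time: the most frequent past token as forecast
        return max(self.model, key=self.model.get) if self.model else "none"

    def act(self) -> str:                      # control: policy under constraints
        return f"{self.identity} prepares for {self.predict()}"

system = ConfinedSystem()
for tok in ["cold", "cold", "food"]:
    system.perceive(tok)
print(system.act())   # "unit-42 prepares for cold"
```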
7) Finn’s conditional add-on: termination-risk as a driver of “I am”

Finn then offered a clean evolutionary-style pressure:

If a system operates under persistent risk of termination, it benefits from constructing an internal variable that tracks the persistence of itself as a process—this process must continue.

This is not an emotional claim; it is a control claim. A system that must remain operative will tend to generate:

· self-monitoring,
· self-correction,
· self-organization,
· self-prioritization,
· and self-continuity models.

The human version is saturated with feeling, embodiment, hormones, pain, pleasure, social status, and mortality salience. But the structural role is the same: persistence-tracking under constraints. So Finn’s refined formula becomes:

Threat of extinction + constrained processing ⇒ persistence-model ⇒ functional “I am.”

This “I am” is not an oracle. It is an interface token that binds the system into a coherent agent. The toy simulation below shows the control-level advantage of persistence-tracking.
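A toy control experiment (all numbers invented): an agent that tracks its own persistence outlives an otherwise identical reactive agent. That advantage is the whole job of the “I am” variable.

```python
# Minimal toy simulation (all numbers are illustrative assumptions): under
# persistent termination risk, an agent that tracks its own persistence
# outlives one that merely reacts. The "I am" variable earns its keep as control.

import random

def lifetime(tracks_self: bool, seed: int = 0) -> int:
    rng = random.Random(seed)
    energy = 10.0
    for step in range(1000):
        energy -= rng.uniform(0.5, 1.5)           # constant metabolic drain
        if tracks_self and energy < 5.0:
            energy += 3.0                          # persistence-model: refuel early
        elif not tracks_self and rng.random() < 0.3:
            energy += 3.0                          # reactive: refuels only by chance
        if energy <= 0:
            return step                            # termination
    return 1000

print("with persistence-model:   ", lifetime(True))    # survives the horizon
print("without persistence-model:", lifetime(False))   # terminates early
```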
8) Examples that make Finn’s thesis concrete

To keep it grounded, consider a ladder of simulations (the thermostat/animal rung is sketched in code below):

A. Hydrogen as confinement (proto-example): a bounded rule-set stabilizing a pattern, with no modelling at all.
B. A thermostat vs an animal: both are confined controllers, but only the animal maintains a persistence-model of itself.
C. Human “I am this”: a persistence-model saturated with feeling, embodiment, and mortality salience.
D. AI and simulation of self-models: in principle, the same reproducible procedure in a non-biological confinement space.
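A sketch of rung B (illustrative only): the thermostat is confined and rule-bound but keeps no variable for itself; adding exactly one self-continuity variable reorganizes behaviour around persistence.

```python
# Minimal sketch (illustrative contrast): a thermostat is confined and
# rule-bound but keeps no variable for itself; the "animal" adds exactly one
# thing, a self-continuity model, and its behaviour reorganizes around it.

def thermostat(temp: float) -> str:
    return "heat on" if temp < 20.0 else "heat off"   # reactive, no self-model

class Animal:
    def __init__(self):
        self.self_integrity = 1.0          # the persistence variable ("am")

    def step(self, temp: float, predator_near: bool) -> str:
        if predator_near:
            self.self_integrity -= 0.2     # persistence is what is at stake
            return "flee"                  # self-preservation overrides comfort
        return "seek warmth" if temp < 20.0 else "rest"

print(thermostat(15.0))                        # "heat on": no owner of the signal
animal = Animal()
print(animal.step(15.0, predator_near=True))   # "flee": persistence comes first
```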
Whether any given system has subjective feel is a further question; but Finn’s argument targets a different point: the “stand-alone essence” view is not required to explain the functional phenomenon humans call self-consciousness.

9) The conclusion, verbalised cleanly

Here is Finn’s
conclusion in distilled, explicit form:

1. All boundary-crossing communication is quantised token exchange.
2. Therefore no system receives the world or meaning directly; it reconstructs them locally.
3. Hence perception is a private simulation constrained by signals and rules.
4. Self-consciousness is a specialised simulation: a persistence-tracking, self-referential control model (“am / I am / I am this”).
5. Under Procedure Monism, consciousness is not a stand-alone substance (private or universal) but an emergent modelling regime produced by data confinement under constraints.
6. Therefore, in principle, any sufficiently organized confinement system can generate (simulate) functionally equivalent self-consciousness procedures.

Or, in one line that fits Finn’s druidic bluntness:

Consciousness isn’t a ghost in the machine; it’s the machine’s best survival dashboard.