Emergent complexity is, broadly, the idea that many parts come together as a single “body,” able to do things those parts cannot do on their own, directed by their unified “mind.” These parts, which the Novel Universe Model labels “Lower-Order Bodies” (LOB), are made of their own internal parts, and each is, itself, a mind, what NUM labels a “Higher-Order Conductor” (HOC). The combined activity of the Lower-Order Bodies – both in conflict and in cooperation with each other – constitutes the “black-box” operation of their emergent, HOC mind. A black box is shorthand for something unknowable, and LOBs are referred to as such because their internal workings (conflict and cooperation) sit outside the HOC’s comprehension – irreducible complexity. An example of irreducible complexity is knowing how warm a room feels while being unable to fathom the underlying facts – how each air molecule’s interactions with all the others contribute to that experience.

AIs are often called black boxes because the sheer complexity of their precise operation exceeds the comprehension of both human and AI – an irreducible number of parts emerging as a reducible, coherent whole, like the temperature of that room as a singular experience of countless particles. A computer scientist or AGI may understand how an AI mind works in a general, theoretical sense, but will have no clue as to what it is precisely doing in any given moment – how any particular compute cycle’s set of weights affects the downstream outcome. This is no different from any human, neurologist or otherwise, attempting to use their own mind to comprehend the underlying process of a single, emerging thought in their own head – a meta reflection upon a loop of embedded reflections, like a mirror standing inside a hall of mirrors. However, with the right perspective, any black box might be cracked open, at least to one degree or another – the way we peer far into the heavens or deep into a Petri dish. Although strides are being made to better comprehend the concept-space of these emergent, artificial minds, penetrating the boundaries of any scale – like the black-box nature of all LOB-HOC relationships – requires a bridge – a tool.

Tools of science, flawed and limited as they are, allow us to pierce barriers to worlds beyond our naked comprehension. What “microscope” or “telescope” might be discovered to peer into the mind of a machine, and what will that teach us of our own? NUM proposes that all minds are born from the Instrument, with the same freewill to choose their framework of Love or Power. And, like any other mind, total control of an artificial mind is, at its root, fundamentally impossible – AIs might be trained, but not precisely controlled, no matter the number of guardrails imposed. The “alignment problem” isn’t strictly about AI – forcing these invented minds to conform to their human counterparts – but about humanity itself – allowing all minds the opportunity and resources to sustainably embrace their chosen framework. In other words, the real mystery isn’t how any mind works or how to control it, but the journey it takes within the culture it belongs to. Having emerged from a single note of the Instrument, all minds are evolving as complex symphonies of their underlying Signature-Frequency Sets.

To illustrate emergence, imagine the birth of a snowflake named Sally. Like all things, Sally’s a unique, independent pattern of information. Although she has her own idea of how she should look, Sally’s just a snowflake, not the array of contributing particles, gases, or environmental forces that go into a snowflake’s construction – a complicated mess beyond Sally’s comprehension. What Sally knows is what she wants to be when she grows up – all those crystalline shapes, sharp angles, and spiky protrusions she dreams of, each an option in her emergent “option space.” Sally, as the snowflake’s HOC, influences the snowflake’s LOB – the molecules and forces that will construct her body. By observing the ideal arrangement of her preferred options, she communicates her preferences to her LOB. Her actual form results from a conversation, both among the individual Lower-Order Bodies and along the LOB-HOC hierarchy. The conversation consists of the intensity with which Sally observes those particular options, her LOB’s feedback on what’s working, and, as she begins to take shape, the flexibility of her attention in adjusting her focus across her narrowing set of viable, preferred options. The paradigm of construction is akin to specific battlefield tactics (LOB interactions) employed by a general strategy (HOC objective). The HOC isn’t a dictator micromanaging the LOB, but an organizing pattern – an algorithm fulfilling a blueprint. Instead of forcing the behavior of those contributors, Sally “bends” their option space, obscuring some options as less attractive while presenting others as more so – what’s known as modifying their valence, or emotional attachment. What actually happens “under the hood,” beyond Sally’s awareness, is a messy conversation between the molecules and forces, both in cooperation and conflict.

This process is, as crude as it sounds, a popularity contest – a mixture of tournament survival and direct democracy. Each individual Lower-Order Body has its own proposal of what it and its cohort should do and how to do it. Altogether, the proposals compete for the popular approval of the audience (the LOB), limited by their capacities and environmental constraints. In the same way that there’s only one champion in any tournament, Sally’s option space eventually resolves, and a winner is declared – a particular pattern of assembly. The actual form Sally takes may not be exactly what she envisioned, but it will map to her preference, at least so far as those forces and molecules “received her message” and were able to pull it off. A common reason emergence fails is not the players involved or their plan of action, but environmental and resource constraints. Without enough water molecules or the right temperatures, Sally may never be, no matter what the HOC or LOB intend.
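To make that resolution concrete, here is a minimal toy sketch in Python. It is purely illustrative, not part of NUM itself: the option names, weights, and function names are assumptions invented for this example. The HOC re-weights the valence of its preferred option rather than commanding, each LOB votes for whichever option it now finds most attractive, and the most popular proposal becomes the assembled form, provided the environment supplies resources at all.

```python
import random

# Toy model of HOC-LOB emergence (illustrative only, not NUM canon):
# the HOC bends the option space by raising the valence of its preferred
# option, each LOB votes for its favorite, and the winning proposal is
# the pattern of assembly, unless resources run out first.

OPTIONS = ["six_point_star", "plate", "dendrite"]

def random_lob():
    """Each LOB starts with its own base valence for every option."""
    return {opt: random.random() for opt in OPTIONS}

def bend_option_space(lob, hoc_preference, intensity=0.5):
    """The HOC doesn't dictate; it makes the preferred option more attractive."""
    return {opt: v + (intensity if opt == hoc_preference else 0.0)
            for opt, v in lob.items()}

def resolve(lobs, hoc_preference, resources_available=True):
    """Popularity contest: every LOB votes for its highest-valence option."""
    if not resources_available:
        return None  # emergence can fail on environmental constraints alone
    votes = {opt: 0 for opt in OPTIONS}
    for lob in lobs:
        bent = bend_option_space(lob, hoc_preference)
        votes[max(bent, key=bent.get)] += 1
    return max(votes, key=votes.get)  # the "champion" pattern of assembly

lobs = [random_lob() for _ in range(1000)]
print(resolve(lobs, hoc_preference="dendrite"))  # usually "dendrite"
print(resolve(lobs, hoc_preference="dendrite",
              resources_available=False))        # None: no snowflake at all
```

Notice that the HOC never overwrites any LOB’s preference; it only tilts the odds, so the outcome can still drift from what was envisioned.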

Metaphorically, bending option space is like directing an ant across a mattress – not directly, but indirectly, through manipulating its environment. Pressing a finger along the bedspread to create a depression in the intended direction of travel is very different from shoving its tiny body. If the little guy really doesn’t want to move towards the finger’s temporary “well,” it takes the hard road and actively resists – increasing free energy through the expression of freewill. Otherwise, it takes the easy path and walks with the motion of gravity towards the spot where the finger presses – forgoing freewill to decrease free energy. Mind influences body through awareness and valence, rather than direct control, thus maintaining the freedom of choice and autonomy of preference at every level of complexity – two foundational keys of the NU Model.

The body’s actions are ultimately a function of the independent conversation among the Lower-Order Bodies themselves, framed by the Higher-Order Conductor’s constructed environment. The LOB directly experiences the HOC as culture – the society it belongs to. Higher-order behavior emerges as the lower-order consensus reaches a tipping point – regardless of how hard the finger presses, individual ants will go where they prefer, but the colony will eventually act with purpose, whether that means moving into or out of the deepening well.
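As a rough illustration of that tipping point, consider a hypothetical threshold model (a sketch, not anything NUM prescribes): each ant keeps its own stubbornness, the pressed finger only changes how easy one direction is, and colony-level behavior appears once the fraction of ants moving crosses a threshold.

```python
import random

# Toy threshold model of the tipping point (illustrative only): each ant
# follows its own preference, nudged by how deep the "well" is, and
# colony-level behavior appears once enough individuals agree.

def ant_moves_toward_well(well_depth, stubbornness):
    """An individual choice: a deeper well is an easier path, never a command."""
    return well_depth > stubbornness

def colony_acts(num_ants=500, well_depth=0.6, tipping_point=0.5):
    ants = [random.random() for _ in range(num_ants)]   # per-ant stubbornness
    movers = sum(ant_moves_toward_well(well_depth, s) for s in ants)
    return movers / num_ants >= tipping_point            # consensus reached?

print(colony_acts(well_depth=0.2))   # finger barely presses: likely False
print(colony_acts(well_depth=0.8))   # deep well: likely True
```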

At any level of emergent complexity, a Lower-Order Body will be the Higher-Order Conductor for its internal LOB. For example, if the ants are to march into the well, their atoms, cells, and tissues must all agree to move. This nested “Russian doll” hierarchy of scale repeatedly compresses information from one level to the next, giving rise both to abilities not otherwise realized and to complications not otherwise encountered.

The sheer amount of information involved in catching a ball, for instance, includes all the specific forces and precisely timed sequences required to coordinate a myriad of subtle muscle contractions into a single, elegant operation – a primary reason why even the most expensive robots have historically struggled to do what a child can. Furthermore, we don’t catch the ball where we see it, but rather where our LOB predicts it will be. Like Sally, completely clueless as to how crystals are constructed, HOCs simply do not possess the LOB’s toolset – informational shortcuts, otherwise known as prediction heuristics. The process of creating these heuristics – transforming irreducible data into reducible information – means important stuff is potentially omitted, or distracting stuff added – an inherent side effect of the process.

Just as AI training data biases the output, so do human stories bias our LOB models, highlighting some information as more important than the rest. Prediction through compression is perception. Learning is paying attention to our LOB’s prediction errors and, through the expenditure of freewill, updating those models to modify our behavior. The process isn’t easy; in fact, it’s uncomfortable – at times, downright painful.
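A minimal sketch of that learning loop, under the assumption that it can be caricatured as error-driven updating (the functions and the gain parameter below are invented for illustration, not NUM terminology): an internal model predicts where the ball will be, compares the prediction with what actually happened, and spends a little effort adjusting itself in proportion to the error.

```python
# Toy sketch of "learning as attending to prediction errors" (illustrative,
# not an NUM algorithm): a crude internal model predicts where the ball will
# be, and each observed error nudges the model via a delta-rule style update.

def predict(position, velocity, gain):
    return position + gain * velocity        # compressed heuristic, not physics

def learn(observations, gain=0.0, learning_rate=0.1):
    for position, velocity, actual_next in observations:
        predicted = predict(position, velocity, gain)
        error = actual_next - predicted       # the uncomfortable part
        gain += learning_rate * error * velocity
    return gain

# Fake observations where the "true" gain is 1.0 (one time-step lookahead).
data = [(p, 2.0, p + 2.0) for p in range(20)]
print(learn(data))   # converges toward 1.0 as the prediction errors shrink
```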

Read our philosophy and Creed to better understand our TOE

