Emergent complexity is generally the idea that many parts come together as a single “body,” able to do things those parts cannot do on their own, directed by their collective “mind.” These parts, what the Novel Universe Model labels “Lower-Order Bodies” (LOB), create “black boxes” for the operation of their emergent entity, what NUM labels a “Higher-Order Conductor” (HOC). We refer to them as black boxes because the LOB’s internal workings are generally unknown to the HOC.

For an example of how emergence works, imagine the birth of a snowflake named Sally. Although Sally has her own idea of how she should look, she’s just a snowflake, not the array of contributing particles, gases, or environmental forces that go into the construction of a snowflake. That complicated mess is beyond Sally; she just knows what she wants to be when she grows up. What Sally does understand are all the beautiful shapes, angles, and spiky protrusions she desires, each an option in her “option space.” Through her observation of these preferred options and their intended arrangement, Sally, as the snowflake’s HOC, influences the snowflake’s LOB – the molecules and forces that will construct her body. What actually forms is a marriage between LOB and HOC, negotiated as a conversation both among the LOB’s individuals and along the LOB-HOC hierarchy. The HOC isn’t a dictator, micromanaging the LOB, but functions more like an organizing algorithm. Instead of forcing the LOB, the HOC “bends” option space, obscuring some options as less attractive while presenting others as more so – what’s known as changing the option’s valence, or emotional attachment. What actually happens “under the hood” is a messy conversation among the molecules and forces, both in cooperation and conflict. Eventually, that option space resolves, through a kind of direct democracy, into a beautiful pattern. The actual form Sally takes may not be exactly what she envisioned, but it will map to her preference, at least so far as those forces and molecules were able to pull it off given their environmental and resource constraints.

Metaphorically, the bending of option space is like directing an ant across a mattress, not by pushing its tiny body, but by pressing one’s finger into the bedspread, creating a depression along the intended direction of travel. If the little guy really doesn’t want to move towards the finger’s temporary “well,” it takes the hard road and actively resists – increasing free energy through the expression of freewill. Otherwise, it takes the easy path and walks with the motion of gravity towards the spot where the finger presses – forgoing freewill to decrease free energy. Mind influences body through awareness and valence, rather than direct control, thus maintaining freedom of choice and autonomy of preference at every level of complexity – two foundational keys of the NU Model.
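As a loose illustration (not part of NUM itself; the options, weights, and function names below are invented for the sketch), the “bending” of option space can be pictured as re-weighting a probability distribution: the HOC raises the valence of preferred options, but the LOB still makes its own draw.

```python
import random

# Hypothetical option space: each option starts with a neutral valence (weight).
options = {"spiky": 1.0, "smooth": 1.0, "branched": 1.0}

def bend(option_space, preference, strength=3.0):
    """The HOC 'bends' option space by raising the valence (weight) of a
    preferred option; it never removes or forces any option."""
    bent = dict(option_space)
    bent[preference] *= strength
    return bent

def lob_choose(option_space):
    """The LOB still makes its own weighted draw; preferred options are
    merely more likely, never mandatory."""
    names = list(option_space)
    weights = [option_space[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

bent = bend(options, "branched")  # HOC prefers "branched"
choice = lob_choose(bent)         # LOB can still pick any option
```

Note that even a heavily weighted option is never guaranteed – the ant can still take the hard road.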

The body’s actions are ultimately a function of the independent conversation among the Lower-Order Bodies themselves, emerging only once consensus reaches a tipping point – regardless of how hard the finger presses, ants will go where they prefer. At any level of emergent complexity, a Lower-Order Body will be the Higher-Order Conductor for its own internal LOB — for example, if the ants are to march, their atoms, cells, and tissues must all agree to move. This nested “Russian doll” hierarchy of scale repeatedly compresses information from one level to the next, giving rise both to abilities not otherwise realized and to complications not otherwise encountered.
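The tipping-point idea might be sketched as a simple threshold vote among the lower-order parts (a toy model with hypothetical numbers, not NUM’s own formalism):

```python
def consensus_reached(votes, tipping_point=0.5):
    """Action emerges only when the fraction of agreeing members crosses
    the tipping point; outside 'finger pressure' is not itself a vote."""
    return sum(votes) / len(votes) > tipping_point

# Six hypothetical ants: four willing to march, two resisting.
marching = consensus_reached([True, True, True, True, False, False])
```

However hard the finger presses, the outcome is decided by the votes alone – pressure can only change how the voters feel about their options.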

The sheer amount of information involved in catching a ball, for example, includes all the specific forces and precise timing of sequences required to coordinate a myriad of subtle muscle contractions into a single, elegant operation – a primary reason why even the most expensive robots have historically struggled to do what a child can. Furthermore, we don’t catch the ball where we see it, but rather where our LOB predicts it will be, using heuristics (informational shortcuts) the HOC simply does not possess. Creating these heuristics – transforming irreducible data into reduced, usable information – means important stuff is potentially omitted or distracting stuff added, but that’s an inherent side effect of compression.
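A hedged sketch of the “catch it where the LOB predicts” idea: rather than simulating every force on the ball, a cheap heuristic extrapolates from two recent observations. The linear shortcut and the sample positions are invented for illustration; real motor prediction is far richer.

```python
def predict_position(p_prev, p_now, dt_ahead, dt_obs=1.0):
    """Heuristic shortcut: assume the recent velocity simply continues,
    instead of simulating every force on the ball. Compression drops
    detail (drag, spin) -- sometimes important detail."""
    velocity = (p_now - p_prev) / dt_obs
    return p_now + velocity * dt_ahead

# Ball observed at x=2.0 then x=3.0 one time-step apart; aim two steps ahead.
aim = predict_position(2.0, 3.0, dt_ahead=2.0)  # -> 5.0
```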

Just as training data biases an AI’s output, human stories shape our LOB models, highlighting some information as more important than other information. Prediction through compression is perception. Learning is paying attention to our LOB’s prediction errors and, through the expense of freewill, updating those models to modify our behavior. The process isn’t easy; in fact, it’s uncomfortable and, at times, downright painful.
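Loosely, “learning as attending to prediction errors” resembles the classic delta-rule update (a standard toy learning rule, offered here only as an analogy, not as NUM’s formalism): the model shifts in proportion to the gap between prediction and outcome.

```python
def update_model(prediction, outcome, learning_rate=0.1):
    """Attend to the prediction error and nudge the model toward the
    outcome: the bigger the surprise, the bigger (and costlier) the update."""
    error = outcome - prediction
    return prediction + learning_rate * error

model = 0.0                 # naive model keeps predicting 0.0
for _ in range(5):          # repeated surprise gradually reshapes it
    model = update_model(model, outcome=1.0)
```

Each pass shrinks the error but never erases it in one jump, which is one way to read why learning is slow and often uncomfortable.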


