The Emerging Novel Universe
Emergence “occurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole,” Wikipedia.com – Emergence. Through the “bottom-up” processing of multiple, “lower” level components – each component independently following a common set of “simple” rules – a “higher” level of complexity emerges.[i] A model is an “informative representation of an object, person, or system,” Wikipedia.com – Model. By modeling and manipulating a lower-level’s expression of its rules, the higher-level model effectively manages the system’s emergent properties, as suggested by the Attention schema theory (AST).[ii]
AST treats animal “consciousness” as a control model of complexity – what the Novel Universe Model calls a Higher-Order Conductor (HOC). This control model is constructed from multiple sub-models – what NUM calls Lower-Order Bodies (LOB). A mind is the “capstone” model, a body’s “Highest-Ordered Conductor,” sitting atop a layered “pyramid” of models, each model nested with sub-models in a vast network of processing through competent cognition (self-directed, effective problem-solving).
Consciousness allows for the awareness and adjustment of behavior through mental models (stories), as well as the evaluation of a specific action’s overall success or failure – did Hana pick up the cup, or did it slip from her hand to the floor at the exciting news of a first grandchild? Out of a noisy, irreducible world, the answer arises as a manageable, reducible concept – crap, she dropped the cup! With new information, these same models (LOB) allow Hana (HOC) to update her actions and find a way to meet her preference to hold onto the cup. The thought itself derives from its own mental model – a story (pattern of information) she tells herself about who she is as a person. Hana’s a Stoic, someone who keeps her cool, no matter what. Any news, good or bad, won’t wrest away control, at least that’s the behavioral priority she seeks to maintain – her “attractor state,” the Stoic “self” she sees herself as.
Information moves “up” (emerges) by compressing a lower layer’s data into a digestible model for the next layer’s use – irreducibility smoothed out into reducibility. For example, Chris, a university’s Museum Director, is informed of a pile of petrified straw at a dig site. Is it worth investing the university’s resources for further exploration? Director Chris, as the dig site’s “Higher-Order Conductor,” oversees an array of scientists, students, and other workers, each contributor a “Lower-Order Body.” Like a soundboard’s controls, the various groups constitute a set of options the Director might adjust, in other words, Chris’ “option space.”[iii] Chris dials up the most promising one to its maximum value, asking the site’s archaeologists to evaluate the discovery and return with their findings.
Lower-Order Bodies create models that function as “black boxes,” in that their outward behavior is clear to the HOC user, but their internal workings (created and operated by the LOB’s internal competent cognition) are not. The archaeologists’ multi-reasoned evaluation of the evidence is compressed through its black box of expertise into a simplified story, describing ancient beds. To move forward, the ultimate HOC – the Museum Board – must be persuaded. In order to express the Director’s childhood dream (preference) of exploring prehistoric cultures, Chris, functioning as one of the Museum Board’s Lower-Order Bodies, compresses all relevant data into a Director-level black box – a slick PowerPoint presentation.
This cycling back-and-forth between who counts as an LOB or HOC repeats throughout emergent hierarchies of scale, as rising data is processed. The archaeologists’ technically sophisticated reasons were compressed into a salient conclusion for the Director, who took that model and combined it with other Director-level models to present to the Board. Emergence is more than simply compressing the same data over and over; it integrates the combined meaning of the LOBs’ datasets at each level of complexity, creating a useful emergent story for that level’s HOC – the Director’s black box doesn’t just include the archaeologists’ story, but other relevant models, such as the economic and educational benefits the discovery might represent for the university.
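To make the LOB-HOC cycle concrete, here is a minimal Python sketch of the idea (the class name, node labels, and the crude string-joining “compression” are illustrative inventions, not part of NUM’s vocabulary). Each node acts as an HOC toward its children, integrating their reports, then as an LOB toward its parent, emitting one compressed story:

```python
# A toy model of NUM's LOB/HOC hierarchy: every node is a "black box"
# that compresses its children's reports into one summary for its parent.
# All names and the string-joining "compression" are illustrative assumptions.

class Node:
    def __init__(self, name, detail=None, children=None):
        self.name = name
        self.detail = detail            # raw, "irreducible" local data
        self.children = children or []  # this node's Lower-Order Bodies

    def report(self):
        """Act as an LOB: emit one compressed story for the level above."""
        if not self.children:
            return f"{self.name}: {self.detail}"
        # First act as an HOC: gather and integrate the children's reports...
        gathered = [child.report() for child in self.children]
        # ...then compress them (crudely, here) into a single emergent story.
        return f"{self.name} integrates [{'; '.join(gathered)}]"

dig_site = Node("Director Chris", children=[
    Node("Archaeologists", "petrified straw suggests ancient beds"),
    Node("Finance office", "excavation fits this year's budget"),
    Node("Outreach team", "strong educational appeal for students"),
])
board = Node("Museum Board", children=[dig_site])
print(board.report())  # one Board-level story, several compressions deep
```

One call at the Board level triggers the whole cascade: the archaeologists’ detail is compressed by the Director, and the Director’s integrated story is compressed again for the Board.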
Focusing on this seesaw relationship between contributor and evaluator, human vision demonstrates how raw signals from the retinas’ countless rods and cones are transformed from an overwhelming sea of granularity into the brain’s useful “sketch” of reality.[iv] We do not see the world as a collection of individual “pixels,” but as a smooth, comprehensive picture with some level of meaning – the result of a process that is both awe-inspiring and fraught with potential peril. Through transduction, photo-receptor cells turn light into electrical signals. This cacophony of noise is sent to the retina’s second layer of cells to be organized and compressed into a comprehensible “screen-view” for the third layer – retinal ganglion cells, each a different type of Higher-Order Conductor. Filtering the maelstrom, these cells focus on information salient to their particular goals (preferences in action), be it locating the direction of a primary light source, differentiating illumination gradients, recognizing color patterns, etc. Now functioning as Lower-Order Bodies, the ganglion cells send their promising models to the visual cortex for further processing. Through the LOB-HOC hierarchical relationships, the V-neural layers[v] transform edges, hues, and luminance into motion, texture, and depth … even recognizable faces begin to pop out from the fusiform gyrus.[vi] Combined and compressed yet again, the information reveals a comprehensible environment, full of objects, actors, and actions – “high-concept” models assembled from those lower-level characteristics – lines, hues, motion, etc. Finally, the visual cortex’s highly-compressed models assemble into a story for the capstone model of consciousness to digest – its “mental” model. Thus, the mind engages the brain’s narrative: somewhere, a somewhat recognizable thing is doing something that makes some amount of sense. But how does the story match reality?
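The same relay can be caricatured in code as a chain of lossy compression stages (a toy sketch only: the three functions below are loose stand-ins for transduction, ganglion-cell filtering, and cortical integration, with all numbers invented):

```python
# A caricature of the visual pipeline as successive compression stages.
# Stage functions and thresholds are illustrative stand-ins, not biology.

def transduce(photons):
    """Photoreceptors: light in, raw electrical 'pixels' out."""
    return [p * 0.9 for p in photons]          # noisy, granular signal

def ganglion_filter(signal):
    """Ganglion cells: keep only what is salient (here, strong contrast)."""
    return [s for s in signal if s > 0.5]      # massive, lossy compression

def cortical_story(features):
    """Visual cortex: assemble surviving features into one 'mental model'."""
    return f"saw {len(features)} salient features; probably an object ahead"

photons = [0.1, 0.9, 0.8, 0.2, 0.95, 0.05]
print(cortical_story(ganglion_filter(transduce(photons))))
```

Each stage discards most of what it receives, which is exactly why the final story is both usable and fallible.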
The brain scampers about, constantly assembling a plausible narrative to match the LOBs’ river of information. Because of hidden bias and inherent compression errors, no moment of perception in isolation is completely true, and a skeptical eye is a wise instrument. The only insurance we have against self-delusion is time and an open mind. Does new information reinforce or reconfigure the models? Only a continuous stream of data can evolve a more useful story, but if we don’t believe contradictory information is even possible, we’ll remain anchored to what we expect. Add the weight of bias to that anchor, and we’ve entered the world of cognitive dissonance.
Information processed by the cortex isn’t akin to a camera recording a video, where the environment’s signal and the displayed data are one-to-one – that’s more like the retina’s array of photo-receptor cells. Instead, it’s akin to a holographic sketch pad, constructing each impression top-down through the lens of context and history, what’s known as heuristics, or mental shortcuts. Despite that jumble of data at the bottom creating a recognizable “event” at the top, repeated compression implies a staggering amount of data loss, leaving anyone with eyes wired to a brain susceptible to optical illusions.
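One rough way to phrase that top-down construction is Bayesian: perception weighs noisy bottom-up evidence against a prior expectation, and a strong enough prior can overrule the evidence. The toy probabilities below are invented for illustration, anticipating the example that follows:

```python
# Toy Bayesian update: a strong top-down prior can overrule weak
# bottom-up evidence. All probabilities here are invented for illustration.

def posterior_gun(prior_gun, likelihood_gun, likelihood_phone):
    """P(gun | blurry glint), via Bayes' rule over two hypotheses."""
    p_gun = prior_gun * likelihood_gun
    p_phone = (1 - prior_gun) * likelihood_phone
    return p_gun / (p_gun + p_phone)

# The blurry glint is actually more consistent with a phone...
likelihood_gun, likelihood_phone = 0.3, 0.6

# ...but watch what a "criminals carry guns" prior does to the verdict:
for prior in (0.1, 0.5, 0.9):
    p = posterior_gun(prior, likelihood_gun, likelihood_phone)
    print(f"prior={prior:.1f} -> P(gun)={p:.2f}")
# prior=0.1 -> P(gun)=0.05 ; prior=0.9 -> P(gun)=0.82
```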
For instance, bicycle-cop Dave speeds behind panicked Jasper, believing the fleeing burglary suspect is armed. Why? Because it’s the law of the jungle, and criminals carry guns. Under these intense conditions, a mere resemblance to a weapon is enough: Dave’s brain filters the color, reflectivity, and even form of Jasper’s thin cellphone into a bulky pistol. The pursuer briefly, but actually, sees what he expects – a gun in Jasper’s hand. The blink of an eye is all it takes to bring serious consequences. On the other hand, when the story isn’t about survival but heroism, the real threat isn’t the possibility that criminals are armed, but that the officer might lose his composure – his authority – and it all starts with the stories in Dave’s head. Is Dave in charge, directing events, or not? Words certainly matter, but stories drive behavior.
Our information processing schema’s weakness is also its strength – we need those stories, those mental shortcuts. The sheer amount of data processing required to attend to every aspect of our lives would leave us all paralyzed – how long could we consciously juggle our beating heart, breathing lungs, and moving body parts all at once? On its own, successfully digesting an entire apple would likely overwhelm any coordinated team of mechanical engineers, biochemists, and gastroenterologists. Going on autopilot to “enter the house after work” is a single mental model made from various high-concept sub-models: “open the door,” “turn on the light,” “put away the car keys,” “pour the glass of wine,” etc. Furthermore, each sub-model is a compilation of multiple sub-models, each employing specific actions, locations, cues … all the way down to those rods and cones, muscles and tendons carrying out their rudimentary functions (light / dark; squeeze / relax). Little to no conscious awareness is required to move from one door to the other, which explains how one might find oneself, after a hard day, not remembering the trip from car to couch.
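That nesting can be sketched as data (a toy decomposition with invented sub-steps): one high-concept model expands into sub-models, and the whole stack runs without a single “conscious” decision at the top:

```python
# "Enter the house" as a nested stack of sub-models; the sub-steps are
# invented examples. The top-level call never inspects the leaves.

ENTER_HOUSE = {
    "open the door": ["find key", "insert key", "turn handle"],
    "turn on the light": ["locate switch", "flip switch"],
    "put away the car keys": ["walk to bowl", "drop keys"],
    "pour the glass of wine": ["get glass", "uncork bottle", "pour"],
}

def run(model):
    """Execute a high-concept model by expanding its sub-models in order."""
    log = []
    for sub_model, steps in model.items():
        log.extend(steps)  # each step is itself a compressed bundle of
                           # muscle and sensory routines, all the way down
    return log

print(len(run(ENTER_HOUSE)), "rudimentary steps, zero conscious decisions")
```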
Break down the “simple” act of picking up a cup, and all the various energy levels, unique sequences, and specific timing required to orchestrate the vast concert of nerves, muscles, tendons, joints, and sensory feedback systems become a tangled mess of irreducible complexity beyond comprehension. This “easy” task is a serious challenge for our robotic counterparts and their neural networks. Authentic motion is difficult to model and reproduce because there’s more going on than simply moving from A to B. Movement communicates intent – a stalking tiger, a stampeding elephant, a playful monkey. What’s more uncanny, reading a chatbot’s human-like text, or watching a robot’s dog-like motion? Both can chill, but seeing such complex, physical behaviors match so closely to the high-concept “dog” model can be spooky.[vii]
Years of reaching out and securing similar objects in a safe, predictable way refine our neurological circuit (AST model) for the specific action-category “pick up the object.” Predictive coding[viii] is the theory that the past predicts the future in order to live in the present – we don’t catch the ball where we see it, but rather, where we expect it to be. As a toddler or robot, picking up a cup can be a colorful mess, but given enough trial and error, the mindful struggle eventually becomes a thoughtless routine.
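A minimal sketch of that predict-then-correct loop, assuming a constant-velocity guess and an arbitrary correction gain (both simplifications; real predictive-coding models are far richer): the tracker reaches for where it expects the ball, and each miss tunes the next reach:

```python
# A toy predictive-coding loop: act on the forecast, then use the
# prediction error to correct the model. Gain and motion are illustrative.

def track_ball(observations, gain=0.5):
    """Predict each next position from estimated velocity, then update."""
    position, velocity = observations[0], 0.0
    for observed in observations[1:]:
        predicted = position + velocity        # reach where we EXPECT it
        error = observed - predicted           # the prediction error
        velocity += gain * error               # let the miss tune the model
        position = predicted + gain * error    # settle between guess & sense
        print(f"predicted {predicted:5.2f}, saw {observed:5.2f}, err {error:+.2f}")

# A ball moving at a steady +1.0 per step: errors shrink (with a little
# overshoot) as the model learns, like the toddler's trial and error.
track_ball([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
```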
Our relationship to the models that constitute our mind is both an asset and a liability. As an independent mind of its own, each Lower-Order Body is its own Higher-Order Conductor, fully equipped with its own internal models (LOB), built of lower-level cognition and competency. The ability to influence our LOBs while resisting being overly influenced by them may require the effort of free will, but it will always thrive through the basic tools of any successful relationship: interest, respect, skepticism, and dialogue.