The Emerging Novel Universe

Emergence “occurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole” (Wikipedia.com – Emergence). Through the “bottom-up” processing of multiple, “lower” level components – each component independently following a common set of “simple” rules – a “higher” level of complexity emerges.[i] A model is an “informative representation of an object, person, or system” (Wikipedia.com – Model). By modeling and manipulating a lower level’s expression (behavior) of its rules (options), a higher-level model effectively manages emergent properties, as suggested by Attention Schema Theory (AST).[ii]
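For readers who like to see the principle in miniature, here is a minimal sketch, using Conway’s Game of Life as a stand-in for “components following simple rules”: each cell obeys the same local rule, yet a “glider” – a pattern no single cell knows anything about – emerges and travels across the grid.

```python
# A minimal emergence sketch, with Conway's Game of Life as a stand-in:
# every cell follows the same simple local rule; the traveling "glider" is
# a higher-level property that no individual cell possesses on its own.
from collections import Counter

def step(live_cells: set) -> set:
    """Apply one generation of the standard birth/survival rules."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # five cells seeded as a glider
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted diagonally by one cell
```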

AST considers animal “consciousness” a control model of emergence – what the Novel Universe Model calls a Higher-Order Conductor (HOC). This control model is constructed from multiple neurological circuits, or sub-models – what NUM calls Lower-Order Bodies (LOB). A mind is the “capstone” model, a body’s “highest-ordered conductor,” sitting atop a layered “pyramid” of models, each nested with sub-models in a vast network of unconscious processing carried out through competent cognition (self-directed, effective problem-solving).
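As a rough illustration of that pyramid (the names and structure below are hypothetical, not the theory’s actual machinery), a few lines of Python can capture the nesting: every model acts as an HOC toward the sub-models below it and as an LOB toward the model above it, passing up one compressed line rather than the raw detail.

```python
# A rough sketch of the nested pyramid (hypothetical names): each node integrates
# its Lower-Order Bodies and hands one compressed report up to the next layer.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    sub_models: list = field(default_factory=list)  # nested Lower-Order Bodies

    def report(self) -> str:
        """Act as an HOC: gather sub-reports, then pass up one compressed line."""
        if not self.sub_models:
            return f"{self.name}: raw signal"
        sub_reports = [m.report() for m in self.sub_models]  # each is a black box
        return f"{self.name}: integrated {len(sub_reports)} sub-reports"

mind = Model("mind", [                      # the capstone model
    Model("vision", [Model("edges"), Model("color")]),
    Model("motor control", [Model("grip"), Model("balance")]),
])
print(mind.report())  # -> "mind: integrated 2 sub-reports"
```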

Consciousness allows for the awareness and adjustment of behavior through mental models (stories), as well as the evaluation of a specific action’s overall success or failure – did Hana pick up the cup, or did it slip from her hand to the floor at the exciting news of a first grandchild? Out of a noisy, irreducible world, the answer arises as a manageable, reducible concept – crap, she dropped the cup! With new information, these same models (LOB) allow Hana (HOC) to update her actions and find a way to meet her preference to hold onto the cup. The thought itself derives from its own LOB – a mental model, or story (pattern of information), she tells herself about who she is as a person. Hana’s a Stoic, someone who keeps her cool, no matter what. Any news, good or bad, won’t make her lose control; at least, that’s the behavioral priority she seeks to maintain – her attractor state, the “self” she sees herself as.

Information moves “up,” or emerges, as a lower layer’s data is compressed into a digestible, informational model for the next layer’s use – irreducibility smoothed out into reducibility. For example, Chris, a university’s Museum Director, is informed of an odd pile of petrified wood and straw at a dig site. Is it worth investing the university’s resources in further exploration? Director Chris, as the dig site’s “Higher-Order Conductor,” oversees an array of scientists, students, and other workers – each group a “Lower-Order Body,” and each member of that group a contributor (nested LOB) to that Lower-Order Body. Like the sliders of a soundboard’s control panel, the various groups constitute a set of options the Director might adjust – in other words, Chris’ “option space.”[iii] Chris moves the most promising slider to its maximum value, asking all the site’s archaeologists to do a deep dive into the discovery and return with their findings.
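Read as data, Chris’ option space is just a set of adjustable values. A sketch with invented group names and numbers might look like this:

```python
# A sketch of an HOC's "option space" (group names and scores are invented):
# each Lower-Order Body is a slider, and acting means adjusting a slider's value.
option_space = {            # current effort allocated to each group, 0.0 to 1.0
    "archaeologists": 0.2,
    "graduate students": 0.5,
    "site surveyors": 0.3,
}
promise = {                 # the Director's estimate of each group's payoff
    "archaeologists": 0.9,
    "graduate students": 0.4,
    "site surveyors": 0.6,
}

best = max(promise, key=promise.get)  # the most promising slider...
option_space[best] = 1.0              # ...pushed to its maximum value
print(best, option_space)
```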

Lower-Order Bodies create models that function as “black boxes,” in that their outputs are clear to the HOC user, but their internal workings (created and operated by the LOB’s internal competent cognition) are not. The archaeologists’ multi-reasoned evaluation of the evidence is compressed through their black box of expertise into a simplified story, describing to the Director a collection of ancient beds. Exploring prehistoric cultures is Chris’ childhood dream, but first, the ultimate HOC – the Museum Board – must be persuaded. The Director, functioning as one of the Museum Board’s Lower-Order Bodies, compresses all relevant data through the Director’s own black box into a slick PowerPoint presentation.

This cycling back-and-forth between who counts as an LOB or HOC repeats throughout emergent hierarchies of scale, as rising data is processed. The archaeologists’ technically sophisticated reasons were compressed into a salient conclusion (story) for the Director, who took that model and combined it with other Director-level models to present to the Board. Emergence is more than simply compressing the same data over and over; it integrates the combined meaning of the LOBs’ datasets at each level of complexity, creating a useful emergent story for that level’s HOC – the Director’s black box doesn’t just include the archaeologists’ story, but also other relevant models, such as the economic and educational benefits a novel discovery might represent for the university.
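If the Director’s black box were written out as data (the summaries below are invented), the point is that the Board-level story merges several distinct LOB models rather than re-compressing a single one:

```python
# A sketch of integration versus mere re-compression (summaries are invented):
# the Director's pitch merges several LOB stories and adds Director-level meaning.
lob_summaries = {
    "archaeology": "the petrified wood and straw form a collection of ancient beds",
    "economics": "a novel discovery could attract grants and museum visitors",
    "education": "the site offers fieldwork opportunities for students",
}

def integrate(summaries: dict) -> str:
    """Combine distinct LOB stories into one emergent, Board-ready conclusion."""
    return "Recommend further excavation: " + "; ".join(summaries.values())

print(integrate(lob_summaries))
```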

Focusing on this seesaw relationship between contributor and evaluator, human vision demonstrates how raw signals from the retinas’ countless rods and cones transform an overwhelming sea of granularity into the brain’s useful “sketch” of reality.[iv] We do not see the world as a collection of individual “pixels,” but as a smooth, comprehensive picture with some level of meaning – the result of a process that is both awe-inspiring and fraught with potential peril. Through transduction, photo-receptor cells turn light into electrical signals. This cacophony is sent to the retina’s second layer of cells to be organized and compressed into a comprehensible “screen-view” for the third layer – retinal ganglion cells, each type a different Higher-Order Conductor. Filtering the maelstrom, these cells focus on information salient to their particular goals (preferences in action), be it locating the direction of a primary light source, differentiating illumination gradients, recognizing color patterns, etc. Now functioning as Lower-Order Bodies, the ganglion cells send their promising models to the visual cortex for further processing. Through the LOB-HOC relationship, the V-neural layers[v] transform edges, hues, and luminance into motion, texture, and depth … even recognizable faces begin to pop out from the fusiform gyrus.[vi] Combined and compressed yet again, the information reveals a comprehensible environment, full of objects, actors, and actions – “high-concept” models assembled from those lower-level characteristics: lines, hues, motion, etc. Finally, the visual cortex’s highly-compressed models assemble into a story for the capstone model of consciousness to digest – its “mental” model. Thus, the mind engages the brain’s narrative: somewhere, a somewhat recognizable thing is doing something that makes some amount of sense. But does that tale always match reality?
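The scale of that compression is easier to feel with toy numbers (the features and thresholds below are invented, not real retinal circuitry): a million raw values shrink to a handful of features, which shrink again to a single story.

```python
# A toy sketch of the compression cascade (invented features, not real physiology):
# photo-receptor "pixels" -> a few ganglion-like features -> one high-concept story.
import random

pixels = [random.random() for _ in range(1_000_000)]   # the photo-receptor layer

features = {                                           # ganglion-like LOB filters
    "mean brightness": sum(pixels) / len(pixels),
    "bright fraction": sum(p > 0.8 for p in pixels) / len(pixels),
}

# The cortex-like HOC compresses the features into one line for consciousness.
story = "a well-lit scene" if features["mean brightness"] > 0.5 else "a dim scene"

print(f"{len(pixels)} raw values -> {len(features)} features -> {story!r}")
```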

The brain scampers about until it assembles a plausible story to fit the information the LOB models have produced. Because of hidden bias and inherent compression errors, no moment of perception in isolation is completely true, and a skeptical eye is a wise instrument. The only insurance we have against self-delusion is time and an open mind. Does new information reinforce or reconfigure the models? Only a continuous stream of data might fashion a more useful story, but if we don’t believe contradictory information is even possible, we’ll simply continue to see what we expect.

Information processed by the cortex isn’t akin to a camera recording a video, where the environment’s signal and the displayed data are one-to-one – that’s more like the retina’s array of photo-receptor cells. Instead, it’s akin to a holographic sketch pad, constructing each impression top-down, through the lens of contextual expectations and personal history – the moment’s meaning for the HOC and the heuristics (mental shortcuts) of the LOB’s compressed models. Despite that jumble of data at the bottom creating a recognizable “event” at the top, repeated compression implies a staggering amount of data loss, leaving anyone with eyes wired to a brain susceptible to optical illusions.

For instance, bicycle-cop Dave speeds behind panicked Jasper, believing the fleeing burglary suspect is armed. Why? Because of the story drilled into Dave’s head: it’s the law of the jungle out there, where all criminals are likely armed, and self-preservation means everything. This bias places both at risk. The cop might not only see what isn’t there, but also react with the mindset of a follower instead of leading with the discernment of authority. Under these conditions, a mere resemblance to a weapon – Jasper’s cellphone pulled from his coat pocket – and the model for the high-concept object “gun” fires in Dave’s front-loaded brain. From ganglion cells to V-neurons, between the time the object’s reflected light hits Dave’s retinas and the time the processed data reaches conscious awareness, the color, reflectivity, and even the form of Jasper’s thin cellphone have been cherry-picked to fit the cop’s narrative, now shoehorned into the bulky shape of a deadly pistol. The pursuer briefly, but actually, sees a gun. The blink of an eye is all it takes for the encounter to end in serious consequences.
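One way to make the mechanism concrete is a small Bayesian example with invented probabilities: the same ambiguous evidence produces two different percepts depending on the prior Dave carries into the chase.

```python
# A worked example with invented numbers: how a front-loaded prior flips what is "seen."
def posterior_gun(prior_gun, likelihood_if_gun=0.6, likelihood_if_phone=0.3):
    """Bayes' rule: P(gun | a dark, hand-sized object pulled from a pocket)."""
    p_evidence = likelihood_if_gun * prior_gun + likelihood_if_phone * (1 - prior_gun)
    return likelihood_if_gun * prior_gun / p_evidence

print(round(posterior_gun(prior_gun=0.05), 2))  # service mindset: ~0.10, stay calm
print(round(posterior_gun(prior_gun=0.60), 2))  # "all criminals are armed": 0.75
```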

Institutional law-enforcement training biases students to see what they are trained to see – tall tales of imminent threats. Additionally, the warrior culture of modern law enforcement instills a warrior mindset, something of value on the battlefield, but problematic in domestic settings. When a cop’s top priority isn’t exemplifying and symbolizing the law, but self-preservation, even at the cost of another’s life, combat, not service, is the dominant narrative. In such cases, the freewill required to change Dave’s preference for safety into a preference for service may lie beyond reach. Fundamentally, it’s not just a question of Dave’s eyesight, level of prejudice, or character, but also a deeper, systemic issue. It’s about the stories in Dave’s head, and his predisposition to believe in them. Change the culture’s underlying stories, and what Dave sees changes with an evolving mindset – a predisposition to believe a different set of stories. First, address the basic plot. What does it mean to be a cop? What are their preferences, goals, and priorities?

Officers of the law whose preference for service supersedes their combat training are predisposed to believe in a different story. Instead of defining themselves through the contemporary lens of an “enforcer” of the law, they define themselves as a “hero” of the People, in service of their fellow citizens. This priority runs in direct contradiction to the goals of their combat-oriented indoctrination and warrior culture. Pushing against the tidal wave requires the expression of freewill – not an easy job. But change the culture, and the job gets easier.

When the story isn’t about fighting to survive, but about modeling law-abiding leadership – a hero willing and expected to honorably put their life on the line – protecting and serving the community becomes the top priority. In such cases, the real threat isn’t the possibility that all criminals are armed, but that the officer might lose his composure, his authority – Dave’s no longer in control of the situation or himself. Sure, Jasper might be armed, but that’s what backup and tactical cover are for – why it’s important to keep a cool head, rely on his team, and make the conscious effort to slow down. Serving the community (not himself) becomes the priority when Dave’s primary concern isn’t forcefully taking Jasper into custody while protecting himself at all costs. Exemplifying the law starts with the stories in Dave’s head. Is Dave the true authority – a compassionate guardian dealing with an unruly ward, a guardian who will not be brought down to a childish level of mutual combat, but is, instead, in control of himself and the situation? Or is Dave in a fight for his life – a frightened soldier, equally matched against a terrifying enemy in the fog of war? When the story changes, the opposing narrative of combat-first becomes more difficult to express, especially for those already predisposed towards service. Those who act rashly are likely embarrassed by their lack of control, their inability to be the “adult in the room.” Why? Because the cultural expectations have changed. Dave will now be telling himself a different story: “Cops are heroes with a sacred job, not children fighting on the playground.” Words matter, but stories drive behavior.

Our information processing schema’s weakness is also its strength – we need those stories, those mental shortcuts. The sheer amount of data processing required to attend to every aspect of our lives would leave us all paralyzed – how long could we consciously juggle our beating heart, breathing lungs, and moving body parts all at once? Consciously managing the digestion of a single apple would likely overwhelm any coordinated team of mechanical engineers, biochemists, and gastroenterologists. Going on autopilot to “enter the house after work” is a single mental model made from various high-concept sub-models: “open the door,” “turn on the light,” “put away the car keys,” “pour the glass of wine,” etc. Furthermore, each sub-model is a compilation of multiple sub-models, each employing specific actions, locations, cues … all the way down to those rods and cones, muscles and tendons carrying out their rudimentary functions (light / dark; squeeze / relax). Little to no conscious awareness is required to move from one door to the other, which explains how one might find themselves, after a hard day, not remembering how they got from car to couch.
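A sketch of that decomposition (the action names are invented) shows how one high-concept model expands into routines that never need the capstone model’s attention:

```python
# A sketch of the "autopilot" decomposition (invented action names): one high-concept
# model expands into sub-models, and those into routines, with no conscious oversight.
enter_the_house = {
    "open the door": ["grip handle", "turn", "push"],
    "turn on the light": ["locate switch", "flip switch"],
    "put away the car keys": ["open drawer", "drop keys", "close drawer"],
    "pour the glass of wine": ["fetch glass", "uncork bottle", "pour"],
}

def run_on_autopilot(model: dict) -> None:
    """Execute every sub-model's routines in order; nothing asks consciousness for help."""
    for sub_model, routines in model.items():
        for routine in routines:
            pass  # muscles and tendons carry out their rudimentary functions
        print(f"done: {sub_model}")

run_on_autopilot(enter_the_house)  # car to couch, with little memory of the steps
```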

Break down the “simple” act of picking up a cup, and all the various energy levels, differing sequences of actions, and specific timing required to orchestrate the vast concert of nerves, muscles, tendons, joints, and sensory feedback systems become a tangled mess of irreducible complexity beyond comprehension. This so-called “easy” task has historically been one of the most challenging sets of real-world maneuvers for our mechanical counterparts, and modern progress has come from modeling the learning schema of the mammalian brain – neural networks. Deep down, it sorta makes sense. Bodily motion is so difficult to effectively model and authentically reproduce because there’s more going on than simply transitioning from point A to point B. In fact, animals have an inherent sense of another’s intent based on their patterns of movement. Consider: what’s more uncanny, reading a chatbot’s human-like text, or watching a robot that merely resembles the form of a dog, yet moves just like one? Both are chilling to a degree, but seeing such complex physical behaviors match the high-concept “dog” model so closely can be spooky.[vii]

Years of reaching out and securing similar objects in a safe, predictable way refine our neurological circuit (AST model) for the specific action-category “pick up the object.” Predictive coding[viii] is the theory that the past predicts the future in order to live in the present – we don’t catch the ball where we see it, but rather where we expect it to be. For a toddler or a robot, picking up a cup starts as a colorful mess, but given enough trial and error, the mindful struggle eventually becomes a thoughtless routine.
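A toy calculation with invented numbers captures the “catch it where it will be” idea: the brain compensates for its own processing lag by extrapolating the ball’s motion rather than trusting the raw percept.

```python
# A toy example of prediction over perception (invented numbers, not a full
# predictive-coding model): compensate for neural lag by extrapolating motion.
sensed_distance = 2.0    # meters away, as currently reported by the visual system
ball_speed = 10.0        # meters per second, closing on the catcher
neural_lag = 0.10        # seconds of processing delay (assumed)

# Reaching for the raw percept aims where the ball *was*; predicting aims where it *is*.
predicted_distance = sensed_distance - ball_speed * neural_lag
print(f"percept says {sensed_distance} m away, prediction says {predicted_distance} m")
```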

Our relationship to the models that constitute our mind is both an asset and a liability. Each Lower-Order Body, an independent mind of its own, is, like anyone, its own Higher-Order Conductor, fully equipped with its own internal models (LOB) built of lower-level cognition and competency. The ability to influence our LOB, yet resist being overly influenced by them, may require the effort of freewill, but it will always thrive through the basic tools of any successful relationship: interest, respect, skepticism, and dialogue.
