Through the Inverted World

January 25, 2017


Christopher Priest’s 1974 science-fiction novel, Inverted World [1], postulates a moving city called “Earth” that navigates the planet by means of a series of tracks, which must be continually laid along its planned journey. For reasons which become clear as the story develops, the city is striving to reach a shifting point called “optimum”, which always lies ahead of its projected path. The reader comes to understand the peculiar nature of the city through the novel’s primary narrator, Helward Mann. Mann is a surveyor of the city and one of the few who are allowed outside its tightly managed community. In the course of his journeys, Mann encounters weird fields of distortion as he moves further from the city. When traveling behind the city on an escort mission, he discovers that people, objects, and even the landscape around him begin to flatten horizontally, and a strong force pulls him further southward. Time itself stretches, and upon his return to the city he finds that the journey (which from his viewpoint took days) actually took years. On a later trip to survey the terrain north of the city for the laying of new track, Mann observes that the people and objects of the world around him appear to become tall and thin. When he returns from this trip, mere days have passed in the city, while to Mann it seemed as though months had gone by. The “optimum” is revealed to be a mysterious power source, one that the city’s founder tapped after a global devastation and used to create the safe haven of the city. This power source, however, creates perceptual and genetic distortions for those enclosed in its bubble: the denizens of the city, who believe themselves the last remnants of a destroyed Earth, have been on Earth all along, kept enthralled to their tradition by the warped perspective the optimum generates.

We live in, if not exactly the inverted world of Priest’s fiction, then at least a fractured one, where the perspectives of multiple worlds compete for dominance. It seems that, rather than one power steadfastly laying a path to the future, we have multiple forces laying tracks by altogether different lights, and the trains which follow barrel alongside one another, sometimes colliding with spectacular devastation. Our world has become increasingly complex and stratified, its constituent perspectives not always compatible. We require the means to translate across these perspectival dislocations and to grasp hold of points of orientation which allow us to steer a vast array of systems. But how are we to do this when the various models we have of the world seem so alienated from one another?

Alienation and Complexity

The systems theorist R. Felix Geyer identifies alienation as an information-processing problem which inhibits the subject’s decisional and steering capacity, resulting in a breakdown of social function. He argues that modern forms of alienation are qualitatively distinct from those of pre-modern societies, which also experienced alienation in forms of psychological, spiritual, or social trauma. While these maladies have not disappeared, what separates the modern world from past ones is what he calls an “accelerating complexity differential”. Increasing internal complexity, that is, richer modes of information processing, is required to compensate for the growth of external complexity:

“[O]nly internal variety within the system itself can force down the variety due to the system’s environment: the system’s codes must be as highly differentiated as the (potentially system-relevant) variety obtaining in the environment, if the system is to perceive this variety at all, let alone make fully sense of it and be able to steer it. One cannot perceive something one cannot ‘place’. Information that is overly complex relative to the degree of differentiation of the individual’s codes – influenced by his educational level, I.Q., level of emotional development, previous experiences, etc. – goes in one ear and out the other, without registering.” [2]
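
Geyer’s claim descends from Ashby’s law of requisite variety in cybernetics, and it can be made concrete in a few lines of code. The following Python sketch is my own illustration, with invented states and codes: an observer whose internal codes are coarser than the environment’s variety simply fails to register some of its distinctions.

```python
# A minimal sketch of Geyer's point, in the spirit of Ashby's law of
# requisite variety: an observer whose internal "codes" are coarser than
# the environment's variety cannot register some of its distinctions.
# The states and codes here are invented for illustration.

def perceive(state: str, codes: dict) -> str:
    """Map an environmental state onto an internal code; states with no
    matching code 'go in one ear and out the other'."""
    return codes.get(state, "unregistered")

environment = ["boom", "bust", "stagnation", "slow recovery"]

# A coarse observer: two internal categories for four external states.
coarse_codes = {"boom": "good times", "bust": "bad times"}

# A differentiated observer: one code per system-relevant state.
fine_codes = {state: state for state in environment}

for state in environment:
    print(f"{state!r}: coarse -> {perceive(state, coarse_codes)!r}, "
          f"fine -> {perceive(state, fine_codes)!r}")
```

The coarse observer collapses or drops half of the environment’s variety, and any steering based on its percepts is blind to precisely those differences.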


Few would deny that we live in an age of increasing complexity. What exactly that means, however, is open to further definition, since the phrase is often thrown out as a platitude. The complexity sciences have attempted to give the term more rigorous descriptive and explanatory roles. Arising in the ’70s from an interdisciplinary assortment of fields (cybernetics, economics, computer science, physics, ecology, biology, artificial intelligence, and philosophy), what we now know under the more formal title of “Complexity Theory” comes in a variety of flavors and has varied applications depending upon its deployment. Currently, it has strong applications in climate modelling, economic prediction, biology, and the computer sciences [3].

Unsurprisingly, and perhaps a little ironically, the study of Complexity Theory is itself a complex endeavor. In the interest of brevity, then, I will not survey all the species of complexity available, but will use a broad definition that captures the salient features we are interested in for the purposes of this discussion. In a basic sense, complex systems are those said to subsist in a state between order and randomness, where the laws governing the system are relatively hard, but not impossible, to describe or predict [4]. Under the computational perspective, the question of whether or not something is “hard to describe” is given a precise definition in relation to the problem of computability [5]. The degree to which we can accomplish this formally may be thought of in relation to the famous P vs. NP problem (polynomial versus non-deterministic polynomial time: the question of whether every problem whose solution can be quickly checked by a computer can also be quickly solved by one). I will return to this issue later, but let us simply say for now that computational power plays a large role not only in defining what counts as a complex system, but in the degree of sophistication to which our models are capable of accurately representing one.
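
To make the asymmetry tangible, here is a small Python sketch of the subset-sum problem, a standard NP-complete example (the numbers are arbitrary). Checking a proposed answer takes one quick pass over it, while the naive search for an answer may have to examine every subset. The sketch illustrates the asymmetry only; whether a fundamentally faster solving algorithm exists is exactly what P vs. NP asks.

```python
# Verifying a proposed answer (a "certificate") is fast, while the naive
# search for one may examine up to 2^n subsets.
from itertools import combinations

def verify(subset, target):
    """Fast check: does this proposed subset actually sum to the target?"""
    return sum(subset) == target

def solve(numbers, target):
    """Brute-force search: try every subset until one sums to the target."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if verify(subset, target):
                return subset
    return None

numbers = [3, 34, 4, 12, 5, 2]
print(verify((4, 5), 9))   # quick to check a given certificate
print(solve(numbers, 9))   # may take up to 2^6 = 64 tries here
```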

As mentioned previously, Geyer approaches alienation as an informational problem. If we accept, as it seems we must, that any one human being is finite in their ability to come to terms with the complexity of the contemporary situation, deficits will naturally arise. No one person can hold in their head the complex financial models now used by Wall Street to game the system, or the models used to gauge the various causes, effects, and feedback loops of the Earth’s climate system as it interacts with anthropogenic activity. Where Geyer argues that further individual sophistication is required to deal with external complexity [6], there is an alternative remedy: to increase or augment existing human capacities with external aids. This, however, raises an additional problem, and this is where we must apply some techniques of Complexity Theory: how do we mediate between these external aids and our limited capacities in a manner that responsibly manages the various levels of sophistication and feedback without further disruptive consequences?

The Human Interface and Computation

Our knowledge is flawed. Evolution does not select for representational accuracy, but for representational fitness, given a set of environmental constraints. Consider the case of the Australian Jewel Beetle [7]:

The insect nearly went extinct because the males of the species were hardwired to identify only certain representational triggers indicating a suitable mate. Certain brown glass beer bottles called “stubbies” happened to exhibit these characteristics, but in a way that made them far more attractive to the males than the females of the species themselves. Males would swarm carelessly discarded stubbies, ignoring nearby females and decimating the reproductive cycle.

While human beings have much more robust and reflexive representational systems than Jewel Beetles, our senses are still the byproduct of evolutionary constraints whose cost/benefit parameters result in cognitive biases. The cognitive scientist Donald D. Hoffman argues for an interface theory of perception, likening the human perceptual system to the icons on a computer desktop [8]. We see icons of the world through the filter of color and shape, which present the world to us but are not accurate representations of the things in themselves. Or, to put it in Kantian terms, we do not see the world directly, but mediated and constrained by the conditions of our thought.
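
Hoffman has backed this claim with evolutionary simulations; the toy model below is my own much simpler stand-in for the idea, with an invented payoff curve. Resources have a true quantity, but fitness peaks at a middling amount (too little starves, too much poisons). A “truth” perceiver ranks options by their actual quantity, while an “interface” perceiver sees only the payoff, like an icon on a desktop.

```python
# A toy rendition of the "fitness beats truth" intuition behind the
# interface theory; the setup and payoff curve are assumptions for
# illustration, not Hoffman's published simulations.
import random

def payoff(quantity: float) -> float:
    """Fitness payoff peaks at quantity 0.5 and falls off on both sides."""
    return 1.0 - abs(quantity - 0.5) * 2

def truth_choice(options):
    return max(options)              # veridical: picks the largest quantity

def interface_choice(options):
    return max(options, key=payoff)  # sees only the fitness 'icon'

random.seed(0)
truth_total = interface_total = 0.0
for _ in range(10_000):
    options = [random.random() for _ in range(3)]
    truth_total += payoff(truth_choice(options))
    interface_total += payoff(interface_choice(options))

print(f"truth perceiver total payoff:     {truth_total:10.1f}")
print(f"interface perceiver total payoff: {interface_total:10.1f}")
```

In this contrived environment the interface strategy reliably outscores veridical perception, because accuracy about quantity is not what the payoff rewards.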


Overextending analogies between human cognition and computational systems can be misleading, but there is nonetheless a useful comparison to be made here: alongside neurobiology, artificial intelligence is one of the foremost tools we currently have for decoding the mysteries of consciousness. Comparisons with AI help us to understand the peculiar limits and functions of human capacities.

Here I would like to bring up the problem of logical omniscience: what does it mean to say that I “know” something? For example, I can know how to play chess without always knowing the optimal move to make in a game, but can it be said that I really know chess if I do not always know the logical consequences of every possible move? Similarly, human beings can show proficiency with a problem when it is posed one way, but not when it is posed another. If you were asked to compute the product of forty-one and sixty-seven, you could easily arrive at two thousand seven hundred and forty-seven; but if asked for the prime factors of two thousand seven hundred and forty-seven, the problem may not be as easy to calculate, despite the fact that the content of the two problems is the same. The computer scientist Scott Aaronson suggests that “knowledge” in this case is not really so much a question of whether or not we know the truth or falsity of some state of facts, but “do you possess an internal algorithm, by which you can answer a large (and possibly-unbounded) set of questions of some form?” To this he appends the important proviso, “in a reasonable amount of time?” [9]
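
The multiplication example can be run directly. In the sketch below (naive trial division, my own illustration), the “easy” direction is a single operation, while the reverse direction requires a search that grows with the size of the number; for cryptographic-scale integers this gap is what makes factoring hard in practice.

```python
# Same content, two questions of very different difficulty: multiplying
# is one cheap operation; recovering the factors takes a search.

def multiply(a: int, b: int) -> int:
    return a * b            # effectively instant at any human scale

def prime_factors(n: int) -> list:
    """Naive trial division: tries divisors up to sqrt(n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(multiply(41, 67))     # 2747: the 'easy' direction
print(prime_factors(2747))  # [41, 67]: same content, harder question
```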

Until very recently, computers bested human opponents through sheer brute-force computation of potential moves. In certain problem spaces (checkers, for instance) it could be said that they really are logically omniscient about which move is best, rather than possessing a virtual “know-how”. Computers are able to process much larger arrays of data than human beings, though they may not yet have the most efficient algorithms for locating some types of patterns in that data. Deep learning is changing this and, some would argue, making machine learning operate closer to human-style “algorithms”. Human beings still excel at certain types of problems, such as composing literature, making art, and even writing computer algorithms, but this may soon change. What I wish to point out here, however, is that the human style of cognition has a kind of algorithm which is particularly efficient for calculating some problems, those which are local to the human interface and mesh well with our perceptual sensibilities, but not the kind of complex abstract problems for which machines are able to quickly and efficiently compute solutions. What we will need are representations which can connect algorithms between the two interfaces: the human and the computational model.
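
What brute-force omniscience looks like in a small problem space can be shown concretely. The sketch below exhaustively searches one-heap Nim, a toy game of my own choosing (take one to three stones; whoever cannot move loses): the space is small enough that the machine genuinely knows the best move from every state, with no heuristic “know-how” required.

```python
# Exhaustive game-tree search over one-heap Nim: genuine logical
# omniscience, achievable only because the state space is tiny.
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones: int) -> bool:
    """True if the player to move can force a win from this state."""
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a move that leaves the opponent in a losing state, if any."""
    for take in (1, 2, 3):
        if take <= stones and not winning(stones - take):
            return take
    return None  # every move loses: no amount of 'know-how' helps

print(winning(10), best_move(10))  # True 2: leave a multiple of four
print(winning(8), best_move(8))    # False None: a lost position
```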

Cognitive Mapping and Heuristics

Fredric Jameson’s famous conception of “cognitive mapping” calls for a renewal of the pedagogical function of art: a form of representation equal to the abstract totality and complexity of contemporary capitalism. Jameson (writing in 1988) echoes Geyer’s concerns: “There comes into being, then, a situation in which we can say that if individual experience is authentic, then it cannot be true; and that if a scientific or cognitive model of the same content is true, then it escapes individual experience.” [10] Much ink has been spilled both advocating for and critiquing cognitive mapping, but nearly everyone agrees that it poses enormous epistemological difficulties. Jameson himself notes that conspiracy theory might be regarded as a degraded form of cognitive mapping, one that reduces the problematics of structural abstraction to a simplistic narrative. A good cognitive map must be able to represent complex and dynamic states with a high degree of accuracy if it is to avoid the pitfalls of a degraded narrative. This presents two major, related epistemic challenges: 1. avoiding reification into a static model, and 2. providing efficient access to that model, so that it becomes available for correction and action.


At the heart of both of these issues is the problem of reductionism. Squabbling over what it would mean to produce an accurate cognitive map has confined the majority of the conversation to a mere abstract possibility, or a site of critique. But what if there were a different approach? Complexity Theory’s suite of tools offers new methods of evaluating and integrating partial theories, and of handling reduction honestly. In Complexity Theory, reductionism operates on the methodological level to clarify or expose intra- or inter-systematic elements which may not have been previously intelligible. Reductions become, then, useful fictions which, while not strictly true, provide a position of orientation within or between larger frameworks. Reduction need not be viewed as wholly negative, so long as we are aware of the constraints it imposes [11].

The term “heuristic” was popularized in discussions of human decision making by the economist, social theorist, and computational scientist Herbert Simon. Broadly, it describes a problem-solving tool which enables solutions through the transformation of one problem into another whose pattern is more readily grasped. Let me provide a short example:

Imagine you have accidentally exited the New York subway at the wrong stop. You look around to orient yourself and see several landmarks which you recognize. Realizing that your destination is somewhere south of One World Trade Center, you begin to head in the direction of the building. As you turn down one street, you look up at the building again and see that it has changed position relative to you, and that you are now heading further north than you intended. You turn down another street, and the building now appears to be ahead of you. Through this process of observation and correction, you eventually locate your original destination.

This navigation heuristic transforms the problem of locating a specific destination into the problem of gauging one’s position relative to a building. As the example demonstrates, such solutions are not always optimal, but they can be efficient given limited resources. Furthermore, because heuristics are often biased towards the limits of their original frame of reference (that is to say, reductive), they tend to fail in predictable ways when applied to more general frames. They are not necessarily correct solutions, but ones considered “good enough” to achieve certain results.
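
For concreteness, here is the landmark heuristic rendered as a toy program; the grid coordinates and step rule are invented for illustration. Note that we never compute a route to the destination itself, only repeated corrections of our bearing relative to the landmark.

```python
# A toy rendering of the landmark heuristic: we know only the landmark's
# position and that the goal lies at a fixed offset from it; each step is
# a local correction of our bearing, not a computed route.

def step_toward(pos, landmark, offset_from_landmark):
    """Take one unit step that reduces our error relative to the landmark.

    offset_from_landmark: where the goal sits relative to the landmark,
    e.g. (0, -3) for 'three blocks south of it'.
    """
    goal = (landmark[0] + offset_from_landmark[0],
            landmark[1] + offset_from_landmark[1])
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    # Greedy correction: move one block along the larger error axis.
    if abs(dx) >= abs(dy):
        return (pos[0] + (1 if dx > 0 else -1 if dx < 0 else 0), pos[1])
    return (pos[0], pos[1] + (1 if dy > 0 else -1))

landmark = (0, 10)        # One World Trade Center, in made-up coordinates
offset = (0, -3)          # all we know: the goal is 3 blocks south of it
pos = (4, 2)              # where we surfaced from the subway

while True:
    new_pos = step_toward(pos, landmark, offset)
    if new_pos == pos:    # no error left on either axis: we have arrived
        break
    pos = new_pos
print(pos)                # (0, 7)
```

The walk is not guaranteed to be the shortest route, but each step is cheap to compute and requires no map, which is precisely the trade heuristics make.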

Bringing it together by example…

I would now like to turn to one last example as a way of tying together some of the various themes of this presentation. I do not think the territory I have covered really “solves” the problems of alienation with which we opened the paper, but I hope that it suggests some fertile terrain for further investigation. Keeping with the theme of navigation, I would like to mention the work of the anthropologist Edwin Hutchins. Hutchins’s particular concern is with what he calls “distributed cognition”: the organization of tasks through the coordination of individuals and artifacts to achieve a common goal. He produces a unique comparison of Western and Micronesian navigation systems [12]:

Micronesians primarily navigate by star paths, though there are many other indicators drawn from the environment. The major distinction between the Western and Micronesian systems, however, is that while the Western system depends upon a spatial representation drawn from a vertical perspective, the Micronesian system supposes a temporal representation with the navigator at the center, and the sidereal points and other indicators revolving around their position. The star paths provide points of reference as they proceed across the horizon in a successive and regular manner. The navigator divides the journey into a series of segments based on etaks, the positions of known islands under certain stars. As the journey proceeds, the canoe is regarded as a fixed point, while the navigator updates the etak positions based upon how much time has passed. Without significant external storage systems (written language, maps, compasses, etc.), the Micronesian system is significantly more computationally efficient for an unaided human memory. As a consequence of this system, however, the navigator will sometimes generate artificial reference points to aid the efficiency of the calculation. These reference points were misinterpreted by Westerners as actual landmarks, but the so-called “phantom islands” are in fact projections of the journey in the mind of the navigator, aiding the calculation process and allowing the navigator to work out direction by triangulating between the canoe, the star paths, and the etak segments. Similar to my previous example of navigating New York, the etak segments are not so much an indication of units of distance between the navigator’s position and intended destination as a means of arriving at the bearing of the destination [13].
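
At the risk of flattening a rich practice, the skeleton of the etak computation might be sketched as follows; the segment names, pace, and times are invented, and the real system integrates many more environmental cues. What the sketch preserves is the essential inversion: position is tracked by which segment the reference island currently occupies, updated from elapsed time, with the canoe held fixed.

```python
# A schematic of the etak computation with the canoe as the fixed point:
# progress is tracked not in distance units but by which etak segment the
# (unseen) reference island currently occupies among the star bearings.
# All numbers and names below are invented for illustration.

def current_etak(hours_elapsed: float, hours_per_etak: float,
                 etak_names: list) -> str:
    """Update from elapsed time alone: no chart, no log-line, no compass."""
    index = min(int(hours_elapsed / hours_per_etak), len(etak_names) - 1)
    return etak_names[index]

# A hypothetical voyage divided into four segments by a reference island.
etaks = ["etak of departure", "etak 2", "etak 3", "etak of arrival"]
hours_per_etak = 6.0   # assumed pace under typical sailing conditions

for t in [0, 5, 13, 22]:
    print(f"after {t:2d}h the reference island stands at: "
          f"{current_etak(t, hours_per_etak, etaks)}")
```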

Hutchins is able to frame this comparative study by referring to a generalized computational account that can reconcile differences between frames of representation. He argues that early Western navigators developed systems roughly analogous to the etak system, but diverged radically with the invention of external storage systems (maps, compasses, astral charts, etc.), which crystallized a distinct conceptual frame, one more reliant on abstract global models than on local orientation. Now, let me be clear: I do not intend to argue for the superiority of one system or the other, but instead recommend the utilization of both as necessary for dealing with distinct problems of navigation. We might compare the relatively primitive mapping devices of early Westerners to our complex computational models. The Micronesian system may be seen in the light of heuristics: an orientation strategy which allows the user to make a decision while subject to bounded knowledge. Whereas models could be characterized as maps which chart the notable objects and relationships within a restricted territory, heuristics are more like a compass which points you in the direction you need to travel. Both are cognitive aids, but heuristics are more amenable to unaided cognition due to their high rate of compression. A complex model might require significant external storage to achieve the necessary degree of accuracy, but a heuristic can lead to a feature of that model at a much lower cost.


When shifting between local positions and global structures, we require techniques that not only mitigate the alienating features of global abstractions, but embrace and coordinate local action with our best understanding of those abstractions. To return to the story of the Inverted World, we need to provide the denizens of the city “Earth” with a means of reconciling their distorted interface with the world as it truly is. Models may be able to map the distorted topology of the Inverted World onto our understanding of the real world, but we must also provide new heuristics, or new interfaces, that unfold this provincial relation and allow the navigation of the wider world.

A last word about Art and Contemporary Representation

I think the relation between complex models and the human interface poses a unique representational challenge, one which currently lies outside the dialectic of contemporary art. As Jameson suggests, the pedagogical role which art once fulfilled has largely fallen by the wayside [14]. Didacticism is now seen as an almost wholly negative trait, as strategies of presentation appealing solely to subjective framing have come to take precedence in the contemporary art paradigm. The gesture towards a “contemporary art paradigm” may seem contentious, and would take significantly longer to elaborate than the time I have left, but let me point you to Suhail Malik’s recent talks, “On the Necessity of Exiting from Contemporary Art” [15]. His thesis is that contemporary art is not a unified genre, but an ideological totality coterminous with the neoliberal world view. This ideology embraces the problem of “indeterminacy”, the inability to reliably reconcile individual perspectives with any universal perspective, as a replicative strategy which not only favors but solicits the claims of a “free” subject over any balancing hegemonic claims. Contemporary Art mistakes its inability to represent the totality of the present from any one perspective for an indexical statement about the incompleteness of the present as a totality, and thus positions itself as an accurate representation of the contemporary situation. However, since it has raised this negational foreclosure to an absolute, it cannot provide any positive orientation with respect to how things are, but only repeat its own inadequacy ad nauseam. Art becomes nothing more than a frictionless bubble, where the subject ratifies differences without distinction in a continual state of (un-)freedom. Note that this describes the neoliberal consumer subject to a T. In such a state, capitalist trajectories re-assert their power as the only “real” alternative, as no orientation is provided to countermand them. Contemporary art merely identifies a sequence of symptoms, but is incapable of coordinating an alternative. The most damning tell is that market value ultimately winds up asserting the value of Contemporary Art, since it lacks any normative conditions which would posit alternative criteria for aesthetic valuation.

I’ll now present a deeply truncated (and no doubt flawed) overview of art history: allegorical models of art were inextricably bound to the metaphysics which conditioned their understanding of the world. They served a pedagogical function by producing representations which keyed the viewer into their role in the cosmic order. Modernism broke down the unitary authority of these models, but was incapable of replacing these master-narratives with suitable representational schema. Notable attempts were made, such as the Constructivist project of locating a universal geometric language, but these too were felled by the same sword of incredulity which had inspired the Modernist revolution to begin with. Thereafter Modernism resigned itself to the state of post-modern skepticism, which leads directly into the interminable reign of Contemporary Art.


The implication of what I have presented today is not an argument for a return to the authority of some unitary metaphysical foundation, but rather for the navigation and convergence of our best models. It is the case that these representations are partial and distinct from the objects they represent (that is, not strictly true). It is also the case that the totality of knowledge represented by these models is outside the province of any one human being to master. Nonetheless, our sophisticated models not only offer the best potential access to the world outside of us, but also make non-trivial claims about how that world functions, which has implications for the kinds of decisions we make in relation to them. What I am concerned with is the transfer between models, in particular between our best model of human cognition and our other best models; it is therefore a pragmatic enterprise, and not a foundational one.

This is why I focused on the analogy of the “interface”, the notion that we already represent the world according to a particular “algorithm” or “design”. In a sense what I intend is a kind of allegorical mode of representation, one built not on a metaphysics but on the translation between models, a sort of User Interface design relating principles of human perception to abstract models of external phenomena. Heuristics are key to this focus because they present a means of envisioning compression at the experiential level for limited beings. There are difficult computational problems (P vs. NP) in locating the best available algorithms to do this, but, as I mentioned before, the human mode is still better than computers at finding patterns related to our perceptual interface, and at interweaving the local conditions of culture and environment into a system of representation. One element that my reflection on the Micronesian navigation system glossed over is that the techniques needed to become a navigator were passed on through an oral tradition, embedded in song and culture. We must consider ways of embedding our best models in modes of access that are close to the human interface, tools that grab hold of the tendencies already given in order to lead somewhere new. In this view, art becomes a technology, bootstrapping our perceptual abilities to engage our best conceptions of the world as we now know it, helping to organize access for further exploration and action.

Footnotes

1. Priest, Christopher. Inverted World. New York: New York Review Books, 2008.
2. Geyer, R. Felix. “Individual Alienation and Information Processing: A Systems Theoretical Conceptualization”. 1975.
3. Sawyer, R. Keith. Social Emergence: Societies as Complex Systems, New York: Cambridge University Press, 2005. Pg. 2.
4. Ibid., 3.
5. Ladyman, J., Lambert, J., & Wiesner, K. (2013). “What Is a Complex System?” European Journal for Philosophy of Science, 3, 33-67.
6. Geyer, R. Felix. 1975.
7. Hoffman, Donald D. “The Interface Theory of Perception”. 2008.
8. Ibid.
9. Aaronson, Scott. “Why Philosophers Should Care About Computational Complexity”. 2010.
10. Jameson, Fredric. “Cognitive Mapping”. In: Nelson, C. & Grossberg, L. (eds.), Marxism and the Interpretation of Culture. Urbana: University of Illinois Press, 1988, 347-60.
11. For more on reductionism, its constraints, applications, and potential, see: Wimsatt, William C. Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality. Cambridge and London: Harvard University Press, 2007.
12. Hutchins, Edwin. Cognition in the Wild. Cambridge and London: The MIT Press, 1996, 49-115.
13. Ibid.
14. Jameson, Fredric. “Cognitive Mapping”.
15. Malik, Suhail. “On the Necessity of Exiting from Contemporary Art”. London: Urbanomic. Forthcoming.