Disintegrated Parts


#software-development #philosophy

If a complex system is reduced to a minimum set of logical rules governing its functioning, how would that impact the demands placed on one's cognitive abilities while interacting with said system?

In this writing I’m treating the complex system as a rather abstract philosophical concept, even though the underlying inspiration arose from the field of software engineering.

In assessing the cognitive load of a given complex system I adhere to a distinction between reliance on memorization and reliance on comprehension. Through this abstraction I try to reflect the different modes of operation among people. Some seem to be exceptionally receptive to certain structures and quickly memorize the way they are composed. Others instead seem to rely on their capacity for logical reasoning to deduce information about the functioning of these structures on the fly.

It cannot be that a single complex system is skewed solely towards either of these dimensions. If a complex system is to function consistently, there must be a common set of rules governing its functioning. Though it would be amazing if we were capable of reducing our understanding of everything into a grand unification theory, having such a theory would not be relevant to most aspects of our lives as we perceive them through what we generally define as reality.

In order to meaningfully connect the dots we must be able to reason through different levels of abstraction. The existence of a grand unification theory will not save us from the experience of unexplainable phenomena. If we were to explain these experiences through such a general theory, the information density about said experience would simply be too high for any meaningful use. What would help instead is the use of a different abstraction level. As for explanations of the rules seemingly governing the functioning of our world, one could decide to use different abstraction levels, among which are Newtonian mechanics, general relativity and quantum physics. These all provide a different abstraction level at which we can reason about our surroundings, and though not all of them are fully consistent with one another, they are all still very much relevant to this day.

It’s Richard Feynman who seemed to be exceptionally capable of traversing all these different levels of abstraction, able to go from a high philosophical overview down into the nitty-gritty details of a particular phenomenon.

It would then be the abstraction level(s) through which we approach the complex system that define the distribution of the cognitive load over the abilities of memorization and comprehension. The difficult aspect herein is that at certain abstraction levels some things still will not make sense and cannot be logically deduced. In such a case one can either adopt a different abstraction level, or take the exhibited behaviour for granted and memorize the relationship between the structure of the system and its manifested behaviour, however irrational it may seem.

In order to reduce the cognitive load of such a system I am convinced that the behaviour of a complex system should be logically deducible from the structure of the system itself, rather than the other way around: what is logical can still be memorized, but what is illogical (and must therefore be remembered) cannot be rationalized.
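To make this concrete in software terms, here is a minimal sketch of what I mean; the example and all names in it (a hypothetical discount calculation) are mine and not drawn from any particular codebase. In the first variant the behaviour can only be predicted by memorizing a hidden convention elsewhere in the program, while in the second it is deducible from the structure of the call itself.

```python
# Variant 1: behaviour depends on hidden module-level state.
# To predict the result, one has to *remember* that some other part
# of the program may have flipped this flag.
_seasonal_sale = False

def apply_discount_implicit(price: float) -> float:
    """Applies 10% off, but only when the hidden flag happens to be set."""
    return price * 0.9 if _seasonal_sale else price


# Variant 2: behaviour is deducible from the structure of the call.
# Everything that influences the outcome is visible at the call site,
# so the result can be reasoned out instead of memorized.
def apply_discount_explicit(price: float, seasonal_sale: bool) -> float:
    """Applies 10% off when the caller states that a sale is active."""
    return price * 0.9 if seasonal_sale else price


if __name__ == "__main__":
    print(apply_discount_implicit(100.0))        # depends on hidden state elsewhere
    print(apply_discount_explicit(100.0, True))  # 90.0, deducible from the call itself
```

The second variant trades a little verbosity for the ability to deduce the outcome from the code in front of you, which is exactly the shift from memorization towards comprehension described above.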

There is an interesting side effect caused by a mismatch between the complexity embodied by a system and the abstraction levels used to reason about the behaviour it exhibits. Even though at some fundamental level all behaviour exhibited by the system must adhere to a few basic rules, the actual complexity might not be so easily understood. The interesting thing here is that an incorrectly applied abstraction has the potential to add a level of obscurity rather than improve one's understanding.

At a certain level of complexity one must nevertheless generalize one's understanding of a system, for otherwise it would become impossible to manage and reason through the manifested behaviour. The difficulty in this process of generalization is choosing the right abstraction. Too specific and it does not help. Too generic and it cannot practically be used. At the same time the shape of the abstraction is highly dependent on the specific complexity that needs to be embodied.
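As a purely hypothetical illustration of that trade-off (the report example and every name below are my own, assumed for the sake of the sketch), compare an abstraction that is too specific, one that is too generic, and one whose shape follows the variation it actually needs to cover:

```python
from dataclasses import dataclass

# Too specific: the abstraction encodes exactly one use case,
# so it cannot help with anything beyond it.
def export_march_sales_as_csv(rows):
    return "\n".join(",".join(str(value) for value in row.values()) for row in rows)


# Too generic: it accepts anything, so the caller must memorize which
# hidden combinations of options actually work together.
def process(data, **options):
    raise NotImplementedError("behaviour depends on undocumented option combinations")


# Shaped to the complexity it needs to embody: a report plus a named
# format mirrors the actual variation in the problem, no more and no less.
@dataclass
class Report:
    rows: list


def export_report(report, fmt="csv"):
    if fmt == "csv":
        return "\n".join(",".join(str(value) for value in row.values()) for row in report.rows)
    raise ValueError(f"unsupported format: {fmt}")


if __name__ == "__main__":
    report = Report(rows=[{"product": "widget", "amount": 3}])
    print(export_report(report))  # prints the rows as CSV lines
```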

Ultimately the creation of an abstraction involves a generalization with the goal of minimizing the amount of information required to appropriately deal with certain phenomena.

This last point is incredibly well visualized in this series covering computational fluid dynamics, where the author goes through the numerous generalizations that have been made before one can understand fluid dynamics at the level required for a computer to be able to work with it.

At the time of writing I'm still unsure how it can be that the combination of various perfectly logical constructs may lead to the obscuring of the exhibited behaviour. If I had to make a wild guess I would hypothesize that this is caused by a mismatch between the information density about a complex system and the behaviour it exhibits. Ultimately one would prefer to reason about a single dimension at a time, thus being able to reduce the problem to a causal relationship dependent on a single property.
