A New Kind of Cybernetics
My interest in Cybernetics started in the mid ’70s, when I was introduced to the description of “Analogous systems” in chapter 3.4 of A.Y. Lerner’s book “Fundamentals of Cybernetics”:
The realization that very different structures (mechanical, hydraulic, electrical) can exhibit similar behaviour and be described by identical mathematical models forever changed my perspective on the world.
I still enjoy skimming through this easy-to-read book and finding that, even after half a century, most of its ideas still hold true, some things have changed with technology, and many of its questions remain unanswered.
However, there was one thing in this book (and in Cybernetics as a whole) I was never comfortable with: the distinction between the control and the controlled (sub)systems, where the “controlled” system is usually some high-energy apparatus with the ability to perform work, while the “control” system uses “low-energy” signals both to track some key parameters of the “controlled” system and to issue commands and control signals that change its state and functions.
Being in the engineering line of business, I could understand the need for such a structure when building machines that control other machines, but the majority of the systems Cybernetics is supposed to deal with, organisms and (social) organizations, do not fit this simplified control structure. In such (dynamical) systems, which are, IMO, the only ones worth considering by Cybernetics, it is impossible to separate the control and the controlled elements so cleanly. Control is distributed throughout the system: its elements are most often controlled (constrained) by other elements, but at the same time they also exert control (influence) over other, “higher” elements of the system.
Autopoiesis
Much later, I became aware of Maturana and Varela’s notion of autopoiesis, and their definition immediately got my attention. They defined an autopoietic system as:
“… a network of productions of components which:
a) through their interactions recursively constitute and realize the network of productions that produced them;
b) constitute the boundaries of the network as components that participate in its constitution and realization; and
c) constitute and realize the network as a composite unity in the space in which they exist.”
Autopoiesis, even if its domain seems to be primarily that of “biological machines” and cognition as a “biological phenomenon”, was a great addition to cybernetics. Even though most “cybernetic machines” do not produce or maintain themselves, the paradigm of a network (system) that, through the interaction of its components, realizes that same network (system) and defines it as a unity within its boundaries was exactly what I was missing in Cybernetics.
After getting acquainted with autopoiesis, it was immediately obvious to me that its definition can be applied equally well to modern engineered and social systems, or STS (Socio-Technical Systems). Maturana seemed reluctant to include social and man-made systems in his autopoietic picture, but Varela, on the other hand, was very active in extending the notion of autopoiesis to such things as “artificial intelligence”.
Anyway, the autopoietic definition of the system as a “network of productions” provided a better “generative mechanism” for a “scientific explanation” of the “constitution, realization and maintenance of the system as a composite unity”, but I lost the distinction between levels of control in the organization of the system. How is the system maintained as a “composite network of productions” when there is no clear distinction of “who is in charge” of what?
While the traditional cybernetic (linear) description was too simplistic, the autopoietic (networked) definition introduced a new element of chaos that I had to deal with.
The Dynamical System Model
Fortunately, in the mid-’80s, while attending post-graduate courses in Control Engineering, I found a structure that would, from that moment on, become the mainstay of my “Kihbernetic systems philosophy”. In a course on Continuous and Discrete Tracking Systems, we were introduced to a structure like the one presented in the picture below (scanned from my old notebook).
The context was the analysis of complex, non-linear dynamical systems with memory, and the purpose of the lecture was to show a method where dividing the system into two subsystems, the “memory” part (function F) and the memoryless part (functions f and g), allows the two subsystems to be identified separately and consequently simplifies the analysis of the system as a whole.
It is probably already obvious from the schematic, but just for the sake of clarity, the notation represents the following (physical) variables: u(t) is the input signal (vector) to the system and y(t) is the output; x(t) represents the current state of the system, stored in the memory and generated from the internal signal v(t), which is produced in the memoryless part of the system.
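Written out as equations, the split looks roughly like this (my reconstruction from the schematic, so treat the exact argument lists as an assumption rather than a quote from the lecture):

```latex
\begin{aligned}
v(t) &= f\!\left(u(t), x(t)\right) && \text{memoryless part: state preparation} \\
y(t) &= g\!\left(u(t), x(t)\right) && \text{memoryless part: output} \\
x(t) &= F\!\left(v(t)\right)       && \text{memory part: historical integration of } v(t)
\end{aligned}
```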
The first thing I did was change the shape of this block schematic in order to get the “untangled” version as in the pictures below. The moment I saw it like this was the start of a journey I’m still on.
As stated above, every vector (arrow) in this system had a real physical equivalent (input, output, state), except for v(t), which was simply identified in the lecture as the “state preparation” vector. Everything fell into place the moment I realized that v(t) must be the (novel) information used to build and change the system’s knowledge state maintained in the system’s memory (a “historical” integration function F).
When seen in this form and context, it is immediately obvious that a system exposed to some input (data) may react or behave (function g) according to its present state (knowledge of the situation) in response to that input. At the same time, the system may also analyze the input data (function f) and extract any available (new) information by comparing the received data with that same (current) knowledge state. This (novel) information or, to paraphrase G. Bateson, “any difference which makes a difference in some later event”, is then committed to memory (function F), where it is integrated with existing knowledge to update the knowledge state of the system in a cyclic (recursive) process of learning.
We use our current knowledge to extract information from data and then we use this new information to update our knowledge.
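To make the loop concrete, here is a minimal sketch in Python. It assumes discrete time steps and uses a deliberately naive dictionary as the knowledge state; everything beyond the letters u, v, x, y, f, g and F is an illustrative assumption, not the author’s notation:

```python
class LearningSystem:
    """Toy dynamical system with memory: g produces behaviour, f extracts
    novel information, and F integrates that information into the state."""

    def __init__(self):
        self.x = {}  # knowledge state held in memory

    def g(self, u):
        # behaviour: react to the input according to present knowledge
        key, _ = u
        return self.x.get(key, "no prior knowledge")

    def f(self, u):
        # information extraction: compare the data against the same
        # knowledge state; only a *difference* counts as information
        key, value = u
        return (key, value) if self.x.get(key) != value else None

    def F(self, v):
        # memory: integrate novel information into the knowledge state
        if v is not None:
            key, value = v
            self.x[key] = value

    def step(self, u):
        y = self.g(u)       # react first, using the current state...
        self.F(self.f(u))   # ...then learn from the same input
        return y
```

Calling step(("sky", "blue")) twice shows the cycle: the first call answers from an empty knowledge state while committing the observation to memory; the second call answers “blue”, and f, finding no difference, extracts no new information.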
Note that this representation also fits perfectly with the theory of autopoiesis. Learning is an autopoietic (recursive) process confined within the system, much like the other biological processes of structural self-production (growth) and maintenance, while the work processes carry out the structural coupling of the system with its niche environment. This is the allopoietic part of the system, using resources from the environment to produce something other than the system itself: the output (behaviour, tools, symbols, waste).
Shannon’s Transducer
I was never completely satisfied with the widely accepted notion of “transfer of information” and the fact that many authors treat information and knowledge as commodities that can be stored, transferred from place to place, and used as needed. Compared with matter and energy, information is usually granted a “special capability”: being able to exist in more than one place at the same time.
It is true that if you give me some information, you will not miss it yourself the way you would miss any other thing you gave me. We can both have and use the “same” information.
But is it really the same information? If it is, how can I sometimes misunderstand the information you “gave me”?
The only possible answer to this question is that what is transferred between systems is not information. What is exchanged are mere material or energetic artifacts (products, structures), and those definitely can NOT be in two places at the same time. They can be copied so each of us can have one, but entropy (noise, degradation) inevitably affects all these structures during the exchange, copying, and storage processes.
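A toy illustration of the point (the function name and error model are mine, purely for illustration): what travels is an artifact, each copy is an independent physical structure, and every copying step is exposed to noise.

```python
import random

def noisy_copy(message: str, error_rate: float = 0.01) -> str:
    """Copy an artifact (here a string); entropy may corrupt each symbol."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(
        random.choice(alphabet) if random.random() < error_rate else ch
        for ch in message
    )

# Two copies of the "same" message are separate artifacts; after a few
# hops of exchange and storage they may no longer even be identical.
original = "the same information"
yours, mine = noisy_copy(original), noisy_copy(original)
```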
There is really nothing more (or new) to say about this than what Shannon presented in his 1948 paper, except for a few words about how his theory is frequently misunderstood.
Shannon’s “Information” theory is often accused of the deficiency of not being able to accommodate “novel information” or unforeseen events. First of all, he named his theory a theory of Communication, not Information, and he specifically declared in the introduction of his paper that the theory is not concerned with the meaning of a message, just with its quantitative aspects (emphasis mine):
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.
Shannon also did something else in that seminal paper. In section 8, he provides a description of the transducer, which is the common name for both the transmitter that encodes information into a message and the receiver that uses the inverse process to decode information from the message. Because of its importance for this discussion, the full text of his description is provided in this screengrab of the relevant section of the paper:
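In case the screengrab is hard to read, the heart of that section is the pair of state equations Shannon gives for a transducer with internal memory:

```latex
\begin{aligned}
y_n &= f(x_n, \alpha_n) \\
\alpha_{n+1} &= g(x_n, \alpha_n)
\end{aligned}
```

where xₙ is the n-th input symbol, αₙ is the state of the transducer when that symbol is introduced, and yₙ is the output it produces. Note that Shannon assigns the letters differently from the lecture notation above: here the input is x, the state is α, f produces the output, and g prepares the next state.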
If we put this description into the format of a block diagram, we get the system in the drawing below, which is remarkably similar to what we discussed above.
The block z⁻¹ is just a unit delay function, denoting the fact that the state variable αₙ₊₁ from the current step is applied as the state αₙ of the system in the next step.
In the picture on the right, we added (as per Shannon’s suggestion) an integrative “memory” function F and a new variable zₙ (information) to account for the difference between a simple transducer and a dynamical one that uses (historical) knowledge and information for learning.
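As a sketch of that extended transducer in code (rerouting the output of g through F is my reading of the modified diagram; the names beyond f, g, F and the variables x, y, z and α are assumptions):

```python
def learning_transducer_step(x_n, alpha_n, f, g, F):
    """One step of the dynamical transducer: f emits the output from the
    current state, g extracts the information z_n from the same input, and
    F integrates z_n into the state that the unit delay (z^-1) hands to
    the next step."""
    y_n = f(x_n, alpha_n)         # output, as in Shannon's transducer
    z_n = g(x_n, alpha_n)         # novel information, not a raw next state
    alpha_next = F(alpha_n, z_n)  # memory: integrate information into state
    return y_n, alpha_next
```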
Conclusion
And with that, I completed the circle. I had a consistent framework tying together a number of fundamental concepts of cybernetic (and systems) theory. I had:
- A definition and a model for a dynamical system closed to information but open to the exchange of matter and energy as defined by Ashby;
- A better definition and place for Information and Knowledge as internal variables of the dynamical system with memory;
- A model that could describe autopoietic, learning, and self-regulating functions in complex dynamical systems (biological, social, mechanical, etc.).
Over the years, I have used this simple model to make sense of different situations in very different domains, and it has never failed me. I have yet to find an area where it can’t be used to explain complex systemic issues.