
Last Updated on November 21, 2024


Ashby had a very peculiar relationship with self-organizing systems. In his seminal paper “Principles of the Self-Organizing System” (pp. 115-117) he writes this:

The reason for such a statement is given in another paragraph:

So, according to Ashby, a system (machine) can “self” organize (change state or function) only by some external control parameter. Ashby gives an example explaining his line of thinking:

Now this last statement is crucial. To explain the flaw in Ashby’s thinking we have to draw a block diagram for the above transformations. Ashby’s description from above translates into the block diagram depicted below, where it is the input that defines the state of the system, that is, which of the three transformations will be used by the machine.

In such a case the whole thing is indeed reduced to a single transformation as Ashby describes in the above paragraph.

A random input from the environment will, however, never define the state of the system by itself. A new state of a system is actually, as Shannon explains in chapter 8 of his seminal paper, a function of the current input and the current state, as depicted in the diagram below:
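In symbols (a shorthand of my own, anticipating Shannon’s notation introduced further below), the contrast between the two block diagrams is:

    state_{n+1} = h(input_n)             (input alone defines the next state)
    state_{n+1} = g(input_n, state_n)    (next state depends on input and current state)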

Shannon’s description also seems in line with what Ashby has to say about system states in “Introduction to Cybernetics” (S. 2/17) and the machine following a state trajectory that represents a particular behaviour:

The method corresponds, in the study of a dynamic system, to setting it at some initial state and then allowing it to go on, without further interference, through such a series of changes as its inner nature determines.

This case is now very different from the previous one. By allowing the state from the previous step to be the parameter input for the next, the overall transformations (the state trajectory) have more variety and are much more interesting.

If we look at the same matrix from Ashby’s example above and, let’s say, start with state fb and keep the input constant at a, we’ll end up with a behaviour that oscillates between fb and fc. If we instead start in the same state fb but keep the input constant at b, the behaviour will be completely different: the state will immediately transition to fa in the next step and remain there for as long as the input stays at b.
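Since the matrix itself is not reproduced here, the following minimal Python sketch uses a hypothetical transition table, chosen only so that it reproduces the two behaviours just described; the point is the mechanism next_state = T(input, state), not the particular entries:

    # Hypothetical reconstruction of a transition table consistent with the
    # behaviours described above (not copied from Ashby's book):
    # next_state = T[(input, state)].
    T = {('a', 'fa'): 'fb', ('a', 'fb'): 'fc', ('a', 'fc'): 'fb',
         ('b', 'fa'): 'fa', ('b', 'fb'): 'fa', ('b', 'fc'): 'fc'}

    def trajectory(T, state, inputs):
        """Return the state trajectory produced by an input sequence."""
        states = [state]
        for x in inputs:
            state = T[(x, state)]
            states.append(state)
        return states

    print(trajectory(T, 'fb', 'aaaaa'))   # oscillates: fb, fc, fb, fc, ...
    print(trajectory(T, 'fb', 'bbbbb'))   # settles:    fb, fa, fa, fa, ...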

Ashby on machine coupling

In a letter to Jacques Riguet dated 29 October 1954, Ashby explains his definition of machines and the way they are coupled:

I define the machine (E) with input (I) as a mapping of I x E in E, letting the states E be the output without further mapping into output symbols. I have found this more suitable for my purposes.

If the machine (I x E in E) is to be coupled to another (J x F in F), which may be an observer, then we add, by definition, a special mapping c, of E in J (or F in I) to show which particular way of coupling is used; but the mapping c is not usually needed unless a coupling is required.

Ashby speaks here about two independent machines with two separate sets of states (E) and (F). He elaborates more on the coupling in “Introduction to Cybernetics” (S. 4/6):

A fundamental property of machines is that they can be coupled. Two or more whole machines can be coupled to form one machine; and any one machine can be regarded as formed by the coupling of its parts, which can themselves be thought of as small, sub-, machines.

… What we want is a way of coupling that does no violence to each machine’s inner working, so that after the coupling each machine is still the same machine that it was before.

For this to be so, the coupling must be arranged so that, in principle, each machine affects the other only by affecting its conditions, i.e. by affecting its input. Thus, if the machines are to retain their individual natures after being coupled to form a whole, the coupling must be between the (given) inputs and outputs, other parts being left alone no matter how readily accessible they may be.
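As a concrete (and entirely made-up) illustration of this constraint, the sketch below couples two such machines, M of I x E in E and N of J x F in F, through mappings c (E in J) and d (F in I) only, leaving each machine’s internal table untouched:

    # Minimal sketch of Ashby-style coupling: each machine is a mapping
    # (input, state) -> next state; the coupling only wires states (outputs)
    # to inputs, never touching the machines' internal tables.
    # All tables are invented for illustration.
    M = {('i1', 'e1'): 'e2', ('i1', 'e2'): 'e1',
         ('i2', 'e1'): 'e1', ('i2', 'e2'): 'e2'}   # machine M: I x E in E
    N = {('j1', 'f1'): 'f2', ('j1', 'f2'): 'f2',
         ('j2', 'f1'): 'f1', ('j2', 'f2'): 'f1'}   # machine N: J x F in F

    c = {'e1': 'j1', 'e2': 'j2'}   # coupling c: E in J (state of M drives input of N)
    d = {'f1': 'i1', 'f2': 'i2'}   # return coupling F in I (feedback)

    e, f = 'e1', 'f1'              # initial states of the coupled whole
    for step in range(6):
        i, j = d[f], c[e]          # each machine affects the other only via its input
        e, f = M[(i, e)], N[(j, f)]
        print(step, e, f)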

Ashby provides some more insight into different types of machine couplings in his journal, specifically on pages 4854 and 4855, where he discusses how these couplings relate to Shannon’s description of the transducer, of which he says:

“Shannon’s ‘transducer’ is a more general mapping of I x E in E x J. The generality, however, is excessive (in spite of the pleasing symmetry!) and must go (see below, 4855).”

On Shannon’s “transducer”

On the next page Ashby spells out why:

The full generality of Shannon’s transducer cannot be used. It is equivalent to making a machine with input + output correspond to a mapping of I x E in E x J. Such a mapping (call it h) generates eight others:

In Ashby’s own writing:

Next, he singles out just the last one on the list:

9 is the one that won’t do, for it makes the relation of “machine’s state” to “output” depend on what value the input is at. Shannon allows this; and it may be all right in his work.
It does not matter if the joining is one way only, but when the coupling is with feedback it means that no mapping is defined, for how e is to be transformed depends on the value of f, but not in any uniquely defined way, for the mappings that were used before, h and m, are not conditional on the values of i and k – values that they are going to give. The case breaks down.
In the diagram of immediate effects this case means having the output partly under the input control.
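To restate the objection in symbols (my paraphrase, not Ashby’s own wording): the general mapping h of I x E in E x J always splits into two component mappings,

    h_E : I x E in E    (next state)
    h_J : I x E in J    (output)

and it is h_J, where the output depends on the current value of the input as well as on the state, that Ashby singles out as the case that “won’t do”; in Shannon’s notation below, h_E and h_J are exactly the functions g and f.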

Coupling two machines in a feedback loop would then, according to Ashby, lead to an unacceptable situation like this:

instead of this:

At this point it would be, I think, useful to see how Shannon’s description of the transducer matches Ashby’s understanding of it.

Shannon’s transducer

Shannon published his legendary Mathematical Theory of Communication in The Bell System Technical Journal back in 1948, eight years before the release of Ashby’s Introduction to Cybernetics. Shannon’s description of the transducer (with memory) is different from Ashby’s. It contains only two functions of two variables. The full description given in the paper is as follows:
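(Reproduced here for reference from Section 8 of the 1948 paper: x_n is the n-th input symbol, α_n is the state of the transducer when the n-th input symbol is introduced, and y_n is the output it produces.)

    y_n = f(x_n, α_n)
    α_{n+1} = g(x_n, α_n)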

Note that both functions f and g can be understood and described as complex multi-valued transformations using Ashby’s descriptions from before. The block diagram for Shannon’s transducer can then be depicted like this:

It is immediately obvious that this structure is very different from Ashby’s interpretation in the previous section. Instead of this:

Ashby’s schematic, if it is to properly describe Shannon’s transducer, should look more like this:

The input is still directly connected to both the “machine” and the “output” as in Ashby’s original description but his description of Shannon’s transducer omits the fact that both “machine” and “output” depend on the state as defined by the “machine subsystem” in the previous step of the state trajectory.

The behaviour of Shannon’s transducer

If we want to complicate things a little further, we can add another transformation (F) to the simple unit delay to get a proper “memory function”:

Using Ashby’s canonical notation and the transformation rules described in his book Introduction to Cybernetics, we can now define three functional transformations with some arbitrary elements as follows:

Note that this selection of variables is just an example to prove a point. As Ashby teaches us, the selection can be practically anything as long as it follows the few transition rules described in the book.

We can now explore two timelines describing two different behaviours of the same transducer:

In the first timeline the input is a constant (a) but, nevertheless, the output of the system changes because the state changes. We can interpret this as if the system is “learning” and refining its “knowledge” by revisiting the same set of input data with an “updated” knowledge state.

In the second timeline example the input sequence repeats itself, but we can see that, because the system is “state-defined” and thus dynamical in nature, the state and output do not follow this regularity of the input. Note also that there is nothing “chaotic” in this system. The transformations are completely deterministic. However, even with such a limited number of variables, the system can exhibit some very complicated behaviour.
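Since the actual transformation tables above are given only as an example, the Python sketch below uses a different, made-up set of tables, and assumes the memory function is wired in as state_{n+1} = F(g(input_n, state_n)) with output_n = f(input_n, state_n); it demonstrates only the first-timeline effect, a constant input producing a changing output:

    # Minimal sketch of the transducer with a memory function F.
    # All three tables are invented; only the structure matters.
    f = {('a', 's1'): 'p', ('a', 's2'): 'q', ('a', 's3'): 'p',
         ('b', 's1'): 'q', ('b', 's2'): 'p', ('b', 's3'): 'q'}   # output function
    g = {('a', 's1'): 'x', ('a', 's2'): 'y', ('a', 's3'): 'z',
         ('b', 's1'): 'z', ('b', 's2'): 'x', ('b', 's3'): 'y'}   # "information" extracted
    F = {'x': 's2', 'y': 's3', 'z': 's1'}                        # memory: builds the next state

    state = 's1'
    for n, x in enumerate('aaaaaa'):      # constant input, as in the first timeline
        y = f[(x, state)]                 # visible behaviour
        state = F[g[(x, state)]]          # state keeps changing even though the input does not
        print(n, x, y, state)

Any other tables obeying Ashby’s transition rules can be substituted without changing the structure.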

I’m still not sure why Ashby rejected Shannon’s simple description of the transducer and opted instead for the complex and cumbersome one given in his book on page 49.

In my opinion, Shannon’s elegant framework, if understood properly, has far-reaching consequences beyond its use in communication theory. It introduces several fundamental changes to the definition of dynamical systems, such as:

  1. If the system is open to the flow of matter and energy but closed to the flow of information (as per Ashby’s definition of a “cybernetic system”), then the control of the system must be internal; thus the distinction between the controlling and the controlled system from classical cybernetics is wrong and should be discarded from the discussion of complex, dynamical systems.
  2. As a consequence of 1., strictly speaking, there is no transfer of information between systems either. What is transferred between systems (transducers) are messages, signals, symbols, etc. in the form of matter or energy (wave) structures. The information used for encoding a message by the source transducer (transmitter) may then be re-created by the receiving transducer (receiver) at the destination, and this is obviously not the same information.
  3. As a consequence of 2., information and knowledge are also strictly internal to the system. Information, once committed to memory (function F), is used to build the knowledge (state α) of the system, which (knowledge) is in turn used to extract new information (function g) from external data, as well as to formulate (control) the output (visible behaviour) of the system (function f).

And one last (but not least) remark: this framework is so simple that it can very easily be scaled, formalized in algorithmic form (a model), and used for simulating the complex behaviour of basically any type of dynamical system with memory.

As may be obvious from this short exposé, Ashby’s conclusion that a system cannot self-regulate without an externally coupled control system providing a parameter is based on the incorrect assumption that the state of the system depends exclusively on the input.

Please feel free to leave a comment, idea or suggestion that would expose any flaw you can find in my thinking. For many years I have been looking for data that would prove me wrong, or for a confirmation that a parameter control from the environment is indeed necessary for self-organization, but I was not able to find any.
