Tuesday 29 April 2014

Preconditions to Resilience: 1.2 Perception

Frank Zappa once said:
“A mind is like a parachute. It doesn't work if it is not open.”
Paraphrasing Zappa, we could say that the same applies to a resilient system: it must be open in order to "be at work and stay the same". That is why in my previous post I focused on openness and perception as prerequisites to resilience. There I introduced the three basic services perception is based upon: sensors, qualia, and memory. (As discussed elsewhere, antifragility extends resilience with a (machine-)learning capability, so what we said in that post also applies to computational antifragility.) In this post I continue the discussion by providing a practical example: a perception service for a well-known and quite widespread programming language, the so-called "C" language.

My discussion will not be a very technical one, and I will do my best to remember that the reader may not be a programmer or an expert in computers at all! A number of computer-specific concepts will be required, though, which I will now introduce as gently and as non-technically as possible (at least, as much as is possible for me!)
The reader accustomed to terms such as "programming language", "computer program", or "programming language variables" may skip this part and go immediately to the next one.


To better enjoy this part, the reader is invited to listen to Frank Zappa's "Call Any Vegetable", kindly provided here.

If you want your computer to do things for you, you need to formulate the intended actions in a way that the computer can understand. Though very fast, computers "speak" a very simple language; that language is so simple that it would be impractical and in most cases unreasonable to expect a human being to "speak" the same language as a computer.

People who do are often called nerds or, in some cases, engineers, the second term possibly meaning "persons who talk to engines". (Have you ever seen one such person while s/he calls an engine? Quite moving. Or, at least, the engine often does afterwards.)

As computers are very good at doing very simple things very fast, a first sensible thing to do was to let them understand more complex actions. Engineers talked to computers and created "interpreters". As a result of this magic, instead of speaking directly to the machine, people can now formulate their commands in some special language. Commands are called "programs" and those special languages are called "programming languages".

(By the way, those "interpreters" were programs too. And yes, I'm using the term "interpreter" for the sake of simplicity.)

Once the first programming languages were created, people could translate their commands (for instance, mathematical formulae) into the simple-and-fast "native language" of computers. Not very surprisingly, that language is often called "machine language". Among the first programming languages to be created was FORTRAN, which in fact stands for FORmula TRANslator. Once the trick was found and its positive returns assessed, other nerds/engineers decided to apply it again and again: as a result, we now have programs written in complex programming languages that are automagically translated into programs in other, simpler programming languages. IF each program is correctly translated and ultimately performs the actions that were intended by the user, then the scheme works nicely. Yes, it's a big "IF" there.

In mathematical terms: if each "stage" of the above translation process is an isomorphism (namely, a mapping that preserves in the output the validity of the operations on the input), and if the composition of all the stages is also an isomorphism, then the chances are good that the computer will respond to you as you expect it to.

(In fact even vegetables sometimes are known to respond to you.
By the way, computers are not vegetables. Cabbage is a vegetable. Dig? Need Zappa for that.)

Okay, so now we know more or less what a program is and what a programming language is. We just need a few more little ingredients and then we are ready to go with the main course for today, our perception layer. We still need to explain two "little things". One is memory. You might have heard that computers have memories (you know, "my computer has four gigabytes of that!", "Oh, mine is better, it's got eight", that sort of stuff). Memory is where data is stored. If you store things somewhere, it's good to be able to remember where you stored 'em, otherwise you'd end up like me and the stuff on my desk. But that's another story.

When people want to remember where things are stored, they use names. "Where did you put all your pencils?" "Oh, those ones? They are in the desk drawer." "Desk drawer" should ideally identify in a clear way where I put those pencils. If there are several drawers in my desk, I should be more specific: "they are in the third drawer", for instance. The same applies to computers. Computer memories consist of a long array of "drawers", called "words". If we want to specify where something is stored in memory, we must tell its position in the array. That position is called the address of the word. An action for my computer could then be "let me have a look at the content of the memory word at address 123456"; another one could be "write number 10 in the memory word at address 123456".
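
For readers who already know a bit of C, here is a minimal sketch of those two actions. I use the address of a freshly reserved word rather than a hard-wired address such as 123456, and I will not explain the & and * notation here, so feel free to skip it.

#include <stdio.h>

int main(void)
{
    int word = 0;                    /* reserve one memory word (a "drawer")    */
    int *address = &word;            /* the position of that word in memory     */

    *address = 10;                   /* "write number 10 in the memory word"    */
    printf("the word at %p contains %d\n",   /* "let me look at its content"    */
           (void *)address, *address);
    return 0;
}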

One of the first things that were introduced in programming languages was a better way to refer to those memory words and their content. The engineerds had an idea: let us create names to label certain areas of memory corresponding to memory words. Better, let us allow the program writers to choose their own names. As an example, if I write

int CALEDONIA_MAHOGANIES_ELBOWS;

what I'm actually telling the computer is:

"Hello mr. computer, please reserve a memory word for me; from now on I will refer to said memory word through the name CALEDONIA_MAHOGANIES_ELBOWS; mind that said memory word will be used to store and retrieve integer numbers (or better, computer representations thereof)."

CALEDONIA_MAHOGANIES_ELBOWS is the name of a variable in a programming language. In this case the programming language is called C and the variable is an integer one. The latter means that the variable can be used in any arithmetic (or Boolean) expression that accepts an integer number as an argument; one such expression is, for instance, CALEDONIA_MAHOGANIES_ELBOWS = 7; which stores in the memory word reserved for CALEDONIA_MAHOGANIES_ELBOWS the representation of integer number "7". Another such expression is, for instance, CALEDONIA_MAHOGANIES_ELBOWS / 2. If the two expressions follow each other in the order of their appearance here, then the second expression will return the representation of integer number "3" (integer division truncates, so 7 divided by 2 yields 3 rather than 3.5).
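
Putting the pieces together, here is a complete little C program that declares the variable, stores 7 in it, and then divides it by 2; since the variable is an integer, the division truncates and the program prints 3.

#include <stdio.h>

int main(void)
{
    int CALEDONIA_MAHOGANIES_ELBOWS;        /* reserve a memory word for integers */

    CALEDONIA_MAHOGANIES_ELBOWS = 7;        /* store the representation of 7      */
    printf("%d\n", CALEDONIA_MAHOGANIES_ELBOWS / 2);   /* integer division: 3     */

    return 0;
}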

We can now proceed to our perception service for the C programming language.


As already mentioned, in my previous post I observed how resilience requires some form of reactive or proactive behavior. In turn, those behaviors call for the system to be "open", in the sense discussed in the previous post and here: the system must be able to continuously communicate and "interact with other systems outside of" itself. In what follows, the system at hand will be a program written in C using a special tool. This tool allows a number of sensors to be interfaced and the corresponding qualia to be associated with programming language variables. No special memory services are needed, in that variables are automatically preserved by the hardware.

How does a programming language such as C cope with writing an open system? Not that well, actually. Neither the language nor its standard supporting system provides support for this. How do we best manage this, then? Through what I call reflective variables.

What is a reflective variable? Well, it's a special type of programming language variable. What makes it special is the fact that the value of a reflective variable is not "stable"; rather, it changes dynamically and abruptly. Why? Because a reflective variable is associated with a hardware sensor and stores the values representing the "raw facts" registered by that sensor and converted into the corresponding qualia. Thus if we assume that a reflective variable, called int temperature, is associated with a thermostat, then temperature would automatically change its values so as to reflect the figures measured by the thermostat. As an example, if the thermostat is turned on and measures a temperature of 20°C, a little later reflective variable temperature would be set to integer value "20"; and if at some point the thermostat realizes the temperature has dropped from 20°C to 19°C, then somewhat later temperature would change its value from "20" to "19".
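
To give a feel for the mechanism, here is a minimal sketch of the idea (not the actual tool): a background thread keeps the variable in sync with the sensor, while the rest of the program reads it as if it were an ordinary int. The function read_temperature_sensor is a hypothetical stand-in for the real sensor interface.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

volatile int temperature;                  /* the "reflective" variable           */

static int read_temperature_sensor(void)  /* hypothetical stand-in for the sensor */
{
    return 20;                             /* pretend the thermostat reads 20°C   */
}

static void *reflect(void *arg)
{
    (void)arg;
    for (;;) {
        temperature = read_temperature_sensor();   /* raw fact -> quale           */
        sleep(1);                                   /* refresh once per second     */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, reflect, NULL);        /* start the "sensor" thread   */
    for (;;) {
        printf("temperature = %d\n", temperature);  /* read it like a normal int   */
        sleep(2);
    }
}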

Sensors, reflective variables, and memory provide a C program with a perception service as defined here. This allows a system programmed in C to be "open" — to a certain degree. As an example take a look at the following picture:

The picture shows a program that prints the content of reflective variable int cpu every two seconds. cpu is an integer number that varies between 0 and 100. Said number is in fact the quale that represents the percentage of utilization of the CPU. The Windows task manager is also shown to visualize the actual CPU usage over time.
The actual code of this program and some explanations are given here and here. The code for the system supporting reflective variable cpu is available on demand.
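
The linked code is the authoritative reference; as a self-contained illustration in the same spirit (and for Linux only), the following sketch keeps a cpu variable up to date by sampling /proc/stat in a background thread, while the main program simply prints it every two seconds. The actual supporting system may of course be organized quite differently.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

volatile int cpu;                      /* reflective variable: % CPU utilization  */

static void read_stat(long long *busy, long long *total)
{
    long long u = 0, n = 0, s = 0, i = 0;
    FILE *f = fopen("/proc/stat", "r");              /* the "sensor": kernel stats */
    if (f) {
        fscanf(f, "cpu %lld %lld %lld %lld", &u, &n, &s, &i);
        fclose(f);
    }
    *busy  = u + n + s;
    *total = u + n + s + i;
}

static void *reflect_cpu(void *arg)
{
    long long b0, t0, b1, t1;
    (void)arg;
    read_stat(&b0, &t0);
    for (;;) {
        sleep(1);
        read_stat(&b1, &t1);
        if (t1 > t0)                                 /* raw facts -> quale (0..100) */
            cpu = (int)(100 * (b1 - b0) / (t1 - t0));
        b0 = b1;
        t0 = t1;
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, reflect_cpu, NULL);
    for (;;) {
        printf("cpu = %d%%\n", cpu);                 /* print every two seconds     */
        sleep(2);
    }
}
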
A more complex example is shown in the following picture:
Here we have two reflective variables, int cpu and int mplayer. By using these two reflective variables a program becomes "open" to two context figures: the amount of CPU used (as in the previous example) and the state of an instance of the mplayer video player. As we have already described cpu, we now focus on mplayer: the latter is an integer variable whose qualia identify, e.g., whether an mplayer instance has been launched (code: 4); whether it is currently being slowed down (code: 2); whether the user requested to abort processing (code: 5); and whether the mplayer instance exited (code: 1). The left-hand window shows the mplayer instance while the right-hand window shows our exemplary program. The first highlighted area in the left-hand window shows the text produced by mplayer when it detects that "the system is too slow to play" the current video. The second highlighted area in the left-hand window shows the text produced by mplayer when the user types "^C" and aborts the rendering. In the right-hand window we see cpu growing from 24% to 99 or 100% due to the CPU-intensive rendering task of mplayer. The "Mplayer server:" messages report when reflective variable mplayer changes its state, together with its new state value and an explanation of the meaning of the state transition.

Further explanations are given here and here. The code for the system supporting reflective variables cpu and mplayer is available on demand.
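
Just to fix ideas, here is a small self-contained sketch (my own illustration, not the actual supporting code) of how a program might react to the state codes of mplayer listed above; the current value of the reflective variable is passed in as an argument so that the sketch stands on its own.

#include <stdio.h>

static void report_mplayer_state(int mplayer)   /* mplayer: current state code    */
{
    switch (mplayer) {
    case 4:  printf("mplayer instance launched\n");            break;
    case 2:  printf("mplayer is being slowed down\n");         break;
    case 5:  printf("user requested to abort processing\n");   break;
    case 1:  printf("mplayer instance exited\n");              break;
    default: printf("unknown state: %d\n", mplayer);           break;
    }
}

int main(void)
{
    /* simulate the sequence of state changes described in the text */
    int trace[] = { 4, 2, 5, 1 };
    for (int i = 0; i < 4; i++)
        report_mplayer_state(trace[i]);
    return 0;
}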

In this post and the previous one we discussed perception as a first "ingredient" towards resilient systems. Next, we are going to define and exemplify awareness.

As a final message I'd like to express my gratitude to The Resentment Listener, who is kindly initiating me into the Art, System, and Life of Frank Vincent Zappa. (He's my Zappa guru, though not in the sense of Cosmik Debris, mind! "Now what kind of a guru are you anyway?" 😉)

Creative Commons License
Preconditions to Resilience: 1.2 Perception by Vincenzo De Florio is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Permissions beyond the scope of this license may be available at mailto:vincenzo.deflorio@gmail.com.

Monday 14 April 2014

Preconditions to Resilience: 1.1 Perception

Three important preconditions to resilience are perception, awareness, and planning. Perception is key because "What we cannot perceive, we cannot react from—hence we cannot adapt to". Awareness (also called apperception) is key in that it "defines how [the perception data] are accrued, put in relation with past perception, and used to create dynamic models of the “self” and of the “world”." Planning is also fundamental for the purpose of guaranteeing resilience, as it means being able to make effective use of the accrued knowledge to plan a reactive or a proactive response to the onset of change.

This post is the first of a few in which we shall discuss the above-mentioned preconditions. We begin here with perception.

We begin by defining the main term of our discussion. So, what is perception? In what follows we shall refer to perception as an open system’s ability to become timely aware of some portion of the context. The emphasized terms are those that most likely require some explanation:

Open systems
are systems that continuously communicate and “interact with other systems outside of themselves”. Modern electronic devices and cyber-physical systems are typical examples of open systems, and more and more of them are being deployed around us in different shapes and “things”!
Context
is defined by Dey and Abowd as “any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or object. [...] These entities are anything relevant to the interaction between the user and application, including the user and the application.”
Timely aware
puts the accent on the fact that perception of a context change requires performance guarantees. If I become aware of something only when the consequences of the event are beyond my sphere of reaction, then it is too late: if a goalkeeper becomes aware of the ball only when it has already entered the goal, he or she is not doing their job well.

In order to understand perception and related problems I think it is wise to break perception down into three distinct aspects, which I call sensors, qualia, and memory.

Sensors
may be considered as the primary interface with the "physical world". Sensors register certain “raw facts” (for instance luminosity, heat, sounds...) and transmit information to the system’s processing and control units—its “brains”. The amount and quality of the sensors and of the sensory processes have a direct link with the "openness" of a system and ultimately with its resilience. Note also that the sensing processes imply a change of representation and thus an encoding. The overall quality of perception strongly depends also on the quality of this encoding process.
Qualia
(singular: quale) are the system-dependent internal representations of the raw facts registered by the sensors. Here too, the quality of reactive control, and thus also the quality of resilience, strictly depends on the qualia processes. In particular we need to consider the following quality attributes:
  • The fidelity of the representation process. This may be considered as the robustness of an isomorphism between the physical and the cybernetic domain as explained in this paper;
  • The time elapsed between the physical appearance of a raw fact and the production of the corresponding quale (I call this the qualia manifestation latency);
  • The amount of raw facts that may be reliably encoded as qualia per time unit (which I call reflective throughput).
Memory
is the service that persists the qualia. Whatever the quality of the sensors and qualia services, if the system does not retain information, there's no chance that it will make good use of it! Thus the quality of the memory services of perception is another important precondition to overall quality and resilience. We may consider, among others, the following two quality attributes:
  • The average probability that quale q will be available in memory after time t from its last retrieval (retention probability);
  • How quickly the "control layers" can access the qualia (qualia access time).
        As a digression — don't you find it "magic", so to say, how sometimes you can find a modern truth hidden in an old, old book? I do! And if you want an example of this, have a look at Dante's Divine Comedy, third book, Canto V:
        Apri la mente a quel ch’io ti paleso
        e fermalvi entro; ché non fa scïenza,
        sanza lo ritenere, avere inteso

        (“Open thy mind to that which I reveal,
        And fix it there within; for 'tis not knowledge,
        The having heard without retaining it.”)

        Ain't it amazing how the above three lines closely correspond to sensors, quale, and memory? Magic, isn't it? ;-)
Okay, so if we want to talk about resilience we need to discuss perception first; and if we want to discuss perception we need to consider in turn the above three aspects. Kind of fractal, if you ask me. Good! What now? Well, now we can build models of perception and try to use them to answer questions such as how good (better: how open) a system is, or which of two systems is "better" in terms of perception.

As mentioned in another post, resilience is no absolute figure; you can't tell whether a system is better than another one in terms of resilience without considering a reference environment! Well, the same applies to perception. Also in the case of perception, quality is the result of a match with a reference environment.

Let me illustrate this through the following example: suppose we have a system, S, that can perceive four context figures: figure 1, 2, 3, and 4. We shall assume that the perception subservices of S are practically perfect, meaning that none of the above-mentioned quality attributes (qualia manifestation latency, reflective throughput, retention probability, qualia access time, etc.) translate into limiting factors during a given observation period.

Now we take S and place it in a certain environment, let's say environment E. Let us suppose that five context figures can change in E: the four that are detected by S plus another one, figure 5.

As a result of this deployment step, several changes take place as time goes on. Let us suppose that during a given observation period the following changes occur:

Time segment s1:
Context figures 1 to 4 change their state.
Time segment s2:
Context figure 1 and context figure 4 change their state.
Time segment s3:
Context figure 4 changes its state.
Time segment s4:
Context figures 1 to 4 change their state.
Time segment s5:
All context figures, namely context figures 1 to 5, change their state.
What is depicted above and was just described is clearly the behavior of a dynamic system, thus it is wise to point this out explicitly by writing "E(t)" instead of just "E".

So what happens to S while we move on from s1 to s5? Well, during s1 and s4 we are in a perfect situation: the system perception and the changes enacted by the environment are perfectly matched. In s2 and s3 the situation is still favorable, though no longer optimal: system S is ready to perceive any of the four context figure changes, but changes only affect a subset of those figures. Thus "energy", or attention, is wasted. (Think of an eye that constantly watches something; if we knew that that something would not change its state in the next 5 minutes, we could close the eye and relax for that amount of time 😄)

But the real problem occurs during s5: then, the environment produces a change that is not detectable by system S. A dreadful example that comes to mind is that of a man in the middle of a minefield. Short of minesweeping sensors, the man would have no way to detect the presence of a land mine, often with devastating consequences.

What can we learn from even so simplistic a model as the one we've just shown?

A couple of things in particular:

  1. First, that the design of the perception system already defines the "shape" of the design for resilience. In fact, if S is static, then it can only be the result of design trade-offs carried out by considering a generic environment. A worst-case analysis needs to be carried out to evaluate what worst-possible "range" of environmental conditions system S will be prepared to match. This is clearly an elasticity strategy rather than a resilience one. Apart from a limited and bounded quality, such strategies imply non-negligible development and operating costs and strongly limit the design freedom of the other resilience subsystems, the awareness and planning systems in particular. A better design is therefore that of an S(t) perception system, namely one that is prepared to reconfigure itself so as to "widen" and "narrow" its perception depending on the observed environmental conditions. In the future scenarios of cyber-physical societies depicted, e.g., in our post here, a collective cyber-physical thing S(t) could be dynamically built by selecting cyber-physical sensors and qualia services matching the current requirements.
  2. Secondly, by considering how near or how far the system perception gets to the optimal match with the current environmental conditions, it could be possible to provide the "upper layers" of resilience (namely the awareness and planning subsystems) with an indication of the risk of failures. As an example, if we consider again the above example and the five time segments s1, ..., s5, we could observe that s1 and s4 are those that represent the higher risk of an environment "outwitting" the system design; s2 and especially s3 represent more "relaxed" conditions; while s5 is a condition of perception failure. (A toy sketch of this segment-by-segment match is given right after this list.) In this paper I have shown how this may be used to define a quantitative measure of the risk of failures.
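
Here is the promised toy rendering of the example above, assuming we encode the context figures as bit positions: S perceives figures 1 to 4, and each time segment is represented by the set of figures that change in it. A change outside S's "spectrum" signals a perception failure (as in s5), while zero slack means the environment is exercising S's full perception capability (s1 and s4), which is where the risk of being "outwitted" is highest.

#include <stdio.h>

static int count_bits(unsigned x)
{
    int n = 0;
    while (x) { n += x & 1u; x >>= 1; }
    return n;
}

int main(void)
{
    unsigned S = 0x0F;                                      /* S perceives figures 1..4   */
    unsigned segment[] = { 0x0F, 0x09, 0x08, 0x0F, 0x1F };  /* changes during s1..s5      */

    for (int i = 0; i < 5; i++) {
        unsigned missed = segment[i] & ~S;     /* changes S cannot perceive               */
        unsigned slack  = S & ~segment[i];     /* figures watched but not changing        */
        printf("s%d: %s, slack = %d figure(s)\n",
               i + 1,
               missed ? "perception failure" : "all changes perceived",
               count_bits(slack));
    }
    return 0;
}
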
Next post will be devoted to a particular example: a perception layer for the C programming language.

Creative Commons License
Preconditions to Resilience: 1.1 Perception by Vincenzo De Florio is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Permissions beyond the scope of this license may be available at mailto:vincenzo.deflorio@gmail.com.