Monday 14 April 2014

Preconditions to Resilience: 1.1 Perception

Three important preconditions to resilience are perception, awareness, and planning. Perception is key because "What we cannot perceive, we cannot react from—hence we cannot adapt to". Awareness (also called apperception) is key in that it "defines how [the perception data] are accrued, put in relation with past perception, and used to create dynamic models of the “self” and of the “world”." Planning is equally fundamental to guaranteeing resilience: it means being able to make effective use of the accrued knowledge to plan a reactive or proactive response to the onset of change.

This post is the first of a few in which we shall discuss the above-mentioned preconditions. We begin here with perception.

We begin by defining the main term of our discussion. What, then, is perception? In what follows we shall refer to perception as an open system’s ability to become timely aware of some portion of the context. The key terms are those that most likely require some explanation:

Open systems
are systems that continuously communicate and “interact with other systems outside of themselves”. Modern electronic devices and cyber-physical systems are typical examples of open systems, and they are increasingly being deployed around us in different shapes and “things”!
Context
is defined by Dey and Abowd as “any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or object. [...] These entities are anything relevant to the interaction between the user and application, including the user and the application.”
Timely aware
puts the accent on the fact that perceiving a context change requires performance guarantees. If I become aware of something only when the consequences of the event are beyond my sphere of reaction, then it is too late: a goalkeeper who becomes aware of the ball only once it has crossed the goal line is not doing their job well. (A minimal C sketch of this idea follows.)
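
To make the "timely" part concrete, here is a minimal C sketch; the names, the millisecond unit, and the reaction_deadline parameter are hypothetical choices of mine, purely for illustration:

    #include <stdbool.h>

    /* Timestamps in milliseconds since some common epoch (hypothetical unit). */
    typedef long long ms_t;

    /* Timely awareness: detecting an event is only useful if the latency
     * between its occurrence and its detection still leaves room for a
     * reaction; the goalkeeper must see the ball before it has crossed
     * the goal line. */
    bool timely_aware(ms_t event_time, ms_t detection_time, ms_t reaction_deadline)
    {
        ms_t latency = detection_time - event_time;
        return latency >= 0 && latency <= reaction_deadline;
    }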

In order to understand perception and its related problems I think it is wise to break perception down into three distinct aspects, which I call sensors, qualia, and memory (a small C sketch of the three follows further below).

Sensors
may be considered the primary interface with the "physical world". Sensors register certain “raw facts” (for instance luminosity, heat, sounds...) and transmit information to the system’s processing and control units, its “brains”. The number and quality of the sensors and of the sensory processes have a direct bearing on the "openness" of a system and ultimately on its resilience. Note also that the sensing processes imply a change of representation and thus an encoding. The overall quality of perception strongly depends also on the quality of this encoding process.
Qualia
(singular: quale) are the system-dependent internal representations of the raw facts registered by the sensors. Here too the quality of reactive control, and thus the quality of resilience, strictly depends on the qualia processes. In particular we need to consider the following quality attributes:
  • The fidelity of the representation process. This may be considered as the robustness of an isomorphism between the physical and the cybernetic domain as explained in this paper;
  • The time elapsed between the physical appearance of a raw fact and the production of the corresponding quale (I call this the qualia manifestation latency);
  • The number of raw facts that may be reliably encoded as qualia per time unit (which I call reflective throughput).
Memory
is the service that persists the qualia. Whatever the quality of the sensor and qualia services, if the system does not retain information there's no chance that it will make good use of it! Thus the quality of the memory services of perception is another important precondition to overall quality and resilience. We may consider, among others, the following two quality attributes:
  • The average probability that a quale q will still be available in memory a time t after its last retrieval (the retention probability);
  • How quickly the "control layers" can access the qualia (qualia access time).
        As a digression: don't you find it "magic", so to speak, how sometimes you can find a modern truth hidden in an old, old book? I do! And if you want an example of this, have a look at Dante's Divine Comedy, third canticle (Paradiso), Canto V:
        Apri la mente a quel ch’io ti paleso
        e fermalvi entro; ché non fa scïenza,
        sanza lo ritenere, avere inteso

        (“Open thy mind to that which I reveal,
        And fix it there within; for 'tis not knowledge,
        The having heard without retaining it.”)

        Ain't it amazing how those three lines closely correspond to sensors, qualia, and memory? Magic, isn't it? ;-)
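
Since a later post will be devoted to a perception layer for the C programming language, here is a minimal, purely illustrative C sketch of the three aspects; all type and field names are assumptions of mine, not an actual implementation:

    #include <stddef.h>
    #include <time.h>

    /* A "raw fact" as registered by a sensor: which context figure was
     * observed, the sampled value, and when it was sensed. */
    typedef struct {
        int    figure_id;    /* the context figure this sample refers to */
        double value;        /* the raw reading (luminosity, heat, ...)  */
        time_t sensed_at;    /* when the physical event was registered   */
    } raw_fact;

    /* A quale: the system-dependent internal representation of a raw fact.
     * Keeping both timestamps lets one measure the qualia manifestation
     * latency as encoded_at - sensed_at. */
    typedef struct {
        int    figure_id;
        double encoded_value;
        time_t sensed_at;
        time_t encoded_at;
    } quale;

    /* The sensor and qualia services, seen as abstract interfaces. */
    typedef raw_fact (*sensor_fn)(int figure_id);  /* register a raw fact  */
    typedef quale    (*qualia_fn)(raw_fact fact);  /* encode it as a quale */

    /* The memory service persists qualia for the control layers; retention
     * probability and qualia access time are its quality figures. */
    typedef struct {
        quale  *slots;
        size_t  capacity;
        size_t  used;
    } qualia_memory;
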
Okay, so if we want to talk about resilience we need to discuss perception first; and if we want to discuss perception we need to consider in turn the above three aspects. Kind of fractal, if you ask me. Good! What now? Well, now we can build models of perception and try to use them to answer questions such as how good (or rather, how open) a system is, or which of two systems is "better" in terms of perception.

As mentioned in another post, resilience is no absolute figure; you can't tell whether a system is better than another one in terms of resilience without considering a reference environment! Well, the same applies to perception: also in this case, quality is the result of a match with a reference environment.

Let me illustrate this with the following example. Suppose we have a system, S, that can perceive four context figures: figures 1, 2, 3, and 4. We shall assume that the perception subservices of S are practically perfect, meaning that none of the above-mentioned quality attributes (qualia manifestation latency, reflective throughput, retention probability, qualia access time, etc.) translate into limiting factors during a given observation period.

Now we take S and place it in a certain environment, say environment E. Let us suppose that five context figures can change in E: the four that are detected by S plus another one, figure 5.

As a result of this deployment step, several changes take place as time goes on. Let us suppose that during a given observation period the following changes occur:

Time segment s1:
Context figures 1 to 4 change their state.
Time segment s2:
Context figure 1 and context figure 4 change their state.
Time segment s3:
Context figure 4 changes its state.
Time segment s4:
Context figures 1 to 4 change their state.
Time segment s5:
All context figures, namely context figures 1 to 5, change their state.
What was just described is clearly the behavior of a dynamic system, thus it is wise to point this out explicitly by writing "E(t)" instead of just "E".

So what happens to S as we move from s1 to s5? Well, during s1 and s4 we are in a perfect situation: the system's perception and the changes enacted by the environment are perfectly matched. In s2 and s3 the situation is still favorable, though no longer optimal: system S is ready to perceive changes in any of the four context figures, but changes only affect a subset of those figures. Thus "energy", or attention, is wasted. (Think of an eye that constantly watches something; if we knew that that something would not change its state in the next 5 minutes, we could close the eye and relax for that amount of time 😄)

But the real problem occurs during s5: then, the environment produces a change that is not detectable by system S. A dreadful example that comes to mind is that of a man in the middle of a minefield. Short of minesweeping sensors, the man would have no way to detect the presence of a land mine, often with devastating consequences.
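
One way to make the match between S and E(t) concrete is to encode both the perception spectrum of S and the per-segment changes of E(t) as bitmasks; under that assumption (the encoding and all names below are mine), a small C program classifies the five segments exactly as described above:

    #include <stdio.h>

    /* Bitmask encoding: bit i set means "context figure i+1". */
    typedef unsigned int figure_set;

    enum match_kind { PERFECT_MATCH, ATTENTION_WASTED, PERCEPTION_FAILURE };

    /* Compare what S can perceive with what actually changed in E(t)
     * during one time segment. */
    enum match_kind classify_segment(figure_set perceivable, figure_set changed)
    {
        if (changed & ~perceivable)   /* a change S cannot see           */
            return PERCEPTION_FAILURE;
        if (changed == perceivable)   /* every watched figure changed    */
            return PERFECT_MATCH;
        return ATTENTION_WASTED;      /* some watched figures kept still */
    }

    int main(void)
    {
        const figure_set S = 0x0F;    /* S perceives figures 1..4        */
        const figure_set E[] = {      /* changes during segments s1..s5  */
            0x0F,                     /* s1: figures 1..4 change         */
            0x09,                     /* s2: figures 1 and 4 change      */
            0x08,                     /* s3: figure 4 changes            */
            0x0F,                     /* s4: figures 1..4 change         */
            0x1F                      /* s5: figures 1..5 change         */
        };
        const char *label[] = { "perfect match", "attention wasted",
                                "perception failure" };

        for (int i = 0; i < 5; i++)
            printf("s%d: %s\n", i + 1, label[classify_segment(S, E[i])]);
        return 0;
    }

Running it reports a perfect match for s1 and s4, wasted attention for s2 and s3, and a perception failure for s5, which is exactly the reading given above.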

What can we learn from even so simplistic a model as the one we've just shown?

A couple of things in particular:

  1. First, that the design of the perception system already defines the "shape" of the design for resilience. In fact, if S is static, then it can only be the result of design trade-offs carried out with a generic environment in mind. A worst-case analysis needs to be carried out to evaluate the worst-case "range" of environmental conditions that system S will be prepared to match. This is clearly an elasticity strategy rather than a resilience one. Apart from delivering a limited and bounded quality, such strategies imply non-negligible development and operating costs and strongly limit the design freedom of the other resilience subsystems, the awareness and planning systems in particular. A better design is therefore that of an S(t) perception system, namely one that is prepared to reconfigure itself so as to "widen" or "narrow" its perception depending on the observed environmental conditions. In the future scenarios of cyber-physical societies depicted, e.g., in our post here, a collective cyber-physical thing S(t) could be dynamically built by selecting cyber-physical sensors and qualia services matching the current requirements.
  2. Secondly, by considering how near the system's perception gets to the optimal match with the current environmental conditions, it could be possible to provide the "upper layers" of resilience (namely the awareness and planning subsystems) with an indication of the risk of failures. If we consider again the above example and the five time segments s1, ..., s5, we can observe that s1 and s4 represent the highest risk of the environment "outwitting" the system design; s2 and especially s3 represent more "relaxed" conditions; while s5 is a condition of outright perception failure. In this paper I have shown how this may be used to define a quantitative measure of the risk of failures. (A toy numeric sketch of such an indicator follows this list.)
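
As a back-of-the-envelope illustration only (this is not the quantitative measure defined in the cited paper), such an indicator could be, for instance, the fraction of S's perception spectrum that the environment actually exercised in a segment, using the same bitmask encoding as above:

    typedef unsigned int figure_set;  /* as above: bit i set = context figure i+1 */

    /* Rough, illustrative per-segment risk indicator:
     *   1.0  : the environment saturated S's perception (s1, s4), i.e. the
     *          highest risk of being "outwitted" by one further change;
     *   <1.0 : slack is available (s2, s3);
     *   >1.0 : a change fell outside the spectrum: perception failure (s5). */
    double segment_risk(figure_set perceivable, figure_set changed)
    {
        if (changed & ~perceivable)
            return 2.0;               /* report failure as an out-of-range value */

        int watched = 0, exercised = 0;
        for (figure_set bit = 1; bit != 0; bit <<= 1) {
            if (perceivable & bit) watched++;
            if (changed & bit)     exercised++;
        }
        return watched ? (double)exercised / watched : 0.0;
    }

With the five segments above this yields 1.0 for s1 and s4, 0.5 for s2, 0.25 for s3, and the out-of-range value 2.0 for s5.
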
The next post will be devoted to a particular example: a perception layer for the C programming language.

Preconditions to Resilience: 1.1 Perception by Vincenzo De Florio is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Permissions beyond the scope of this license may be available at mailto:vincenzo.deflorio@gmail.com.
