
06 November 2011

Random Sequences of Code

"Random Sequences of Code" - that's how it was put in the movie "I, Robot" with Will Smith. It occurred as part of an explanation of why robots may one day show nondeterministic or even human-like behaviour. I don't want to go deeper into the chain of arguments presented in the movie. Instead, I want to stay close to the words themselves. And I will argue why probability-driven systems are our best chance for the systems of the future.

The first part is "random". Viewed from a distance, randomness means being out of control. Many philosophers have discussed the nature of randomness. Some say that randomness is only a matter of perception and knowledge: we perceive a process as random if we are not able to explain by which laws it works. Or in other words: for a given state of knowledge, there is a level of complexity beyond which every process looks like a random one. In quantum theory, randomness gained a new value and a new role in what had until then been a deterministic view of the universe.

In some domains of technology there is the term "emergent behaviour". Scientists and engineers use it when they work with systems built from small, independent but connected subsystems. It is a great fear for every safety engineer, because it is an open question how to assess an emergent system (as I will call it from now on) with respect to safety (see this blog post).

But the other part of the quotation is no less interesting: "sequences of code". In the early days of software, programs were a few hundred lines of code, written to fit into small processors. Today we count many programs in millions of lines of code. But look at the developers: their view of the software is still just a few hundred lines of code; the overall picture is lost. Well, no offense - who could keep a million lines of code in their head? The whole software falls apart into sequences of code.

Although this is more a problem of perception, an emergent system can also be understood as a system running through sequences of code: every unit is small and holds some sequence of code. If the emergent system performs a function, many nodes activate their small code blocks (in parallel or in sequence). Try to write down, as one big program, all the code the system runs through in every unit...

My point is this: whether you have to manage one big block of code or an emergent system, it is no longer feasible to demand to know *exactly* in which state (which line of the code, which values of the variables) the system is at any time. In some sense we share a little of the situation of the physicists at the dawn of quantum theory: the comfortable feeling of being able to predict any system *exactly* for any point in time is lost, or at least weakened.

Humans have introduced an interesting weapon against this uncertainty: probability theory. The single unit is not important anymore; only the expectation about the result of the operation of a collection of units is. Details do not matter anymore, only the emergent behaviour, the effect in sum. Sounds like the hope for the technical future, no?
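
A minimal sketch of this shift in perspective (plain Python; the success probability is an invented number): the outcome of any single unit is unpredictable, but the aggregate over many units is known with high confidence.

    import random

    # A single "unit" behaves unpredictably: it succeeds or fails at random.
    def unit_result(p_success=0.9):
        return 1 if random.random() < p_success else 0

    # One unit alone: the outcome of a single run cannot be predicted.
    print(unit_result())                    # 0 or 1 - who knows

    # A collection of units: the aggregate is highly predictable.
    n = 100_000
    total = sum(unit_result() for _ in range(n))
    print(total / n)                        # close to 0.9 on every run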

Well, to me it is an open question whether probability theory can be a tool to gain control. It also depends on the answer to the question whether real randomness exists as a constituent brick of the universe.

Assume the Onion Model of Control is true and randomness exists: in this case there have to be laws (whether we have found them already or not) that make probability theory deterministic on its own level ("shell"). There would be laws which are the probabilistic counterparts of basic physical laws like energy conservation, Newton's laws and so on. They would be the control tool for the universal building block "randomness". The bridge to finding these laws could be statistical thermodynamics and quantum theory. Once found, such laws would in effect allow us to control and predict any system behaviour, even emergent system behaviour.

If randomness doesn't exist, meaning everything is deterministic, then probability theory would not be a way of control built into the universe, but just a human instrument to handle the unknown: instead of watching and controlling every onion shell of control, probability theory would allow us to ignore a subset of shells. Probability would then describe, in sum, what happens in some of the onion shells below. Probability theory would be nothing more than a pragmatic approach to the Onion Model of Control, with the danger of coarseness.

Let's look at the other case: if the Calculating Universe Model were true, then the question is whether probability theory is of any use at all. If everything is given by the conditions at the start and the rules in the system, then randomness cannot exist and is just a matter of perception. But as in the case above, probability theory may again give us the chance to handle emergent or complex systems at least in a pragmatic way, until we humans gain insight into the background.

In sum, building technical things based on probability theory may give us the biggest chance to control what we build, or at least to usefully estimate its behaviour. How precise and powerful this tool set will be depends on which of the assumptions (control model, true randomness or not) matches reality. And of course we have every opportunity to develop our insight into probability further, so we should do it :)

At this point I should remind the reader of the fact (which is very obvious in this post) that my blog is intended more to raise questions than to draw philosophically sound conclusions. That would be the matter of a book - one which, I think more and more, I should write some day :)

01 November 2011

Get me Control - no ?

As you may know, I am a safety engineer. My task is therefore to assess whether the use of a piece of software induces any unacceptable risks.

Safety engineers base this assessment on two pillars. The first is to demonstrate that the software was written with an engineering process suited to reducing human errors; in other words, that the sources of systematic errors have been reduced. The other pillar is to demonstrate that the system is correct, by making evident the correct implementation of the requirements, the appropriateness of the tests, and the handling of possible and foreseeable faults and failures. Roughly speaking, this pillar is about demonstrating that the real, concrete system in our hands is observably safe.

As you might guess, this task is much easier when the software engineering is well developed and when it is possible, at least in principle, to know the whole functional construction and to control it. In fact, safety is about control. Now it may be clear why I am interested in software engineering and the question of control. If the Onion Model of Control is right, we safety engineers and system users have no problem at all.

But - what if the Calculating Universe Model is true? Look at systems biology, the construction of biological artificial life, the construction of DNA-cell machines showing emergent, non-calculable behaviour. What if the Grown Universe Hypothesis is true?

Well, I am convinced that the way we work with functional and product safety today is not feasible for the technical or technobiological systems ahead of us. I argue that we have to drop the idea of control everywhere and at every time. A system cannot be safe only because we know every branch of its behaviour and therefore know that nothing dangerous can occur. Powerful and future-proof safety is when a system can be full of failures and faults, but handles them by itself. See the correction and fault tolerance mechanisms in biology: cells, bodies, populations! But this needs new insight into a different class of construction principles, one which is not available to humankind yet.

Without having a solution at hand, my opinion is that we need another approach to engineering in order to build systems that do what we need to solve our problems. Computer science, understood as the science of information, may take us one step further and let us build safe complex computer systems as well as designed life forms.

21 March 2010

Future of Functionally Safe Systems

One of my responsibilities in my job is functional safety. Roughly speaking, functional safety means that running the software does not induce any unacceptable risk. Typical software applications which are safety relevant are avionics software, train control applications or medical devices. Today, the general approach to achieving functional safety for an application consists of two basic things:

  1. to demonstrate that the software works correctly, which means that it works as the requirements state, and
  2. to demonstrate that everything has been done to avoid errors rooted in human factors.

There are many safety standards in the civil and military areas which prescribe methods to meet objectives 1) and 2) in a repeatable, predictable and documented way. Many best practices of hardware and software engineering have influenced these standards.

The problem is that there is an underlying assumption which is critical from my point of view: that the hardware/software systems are completely deterministic, in the sense that any state of the system can be predicted and tested for any point in time. We are familiar with this from classical physics: if the differential equation of a system is given, we can - at least in principle - predict the system state for any point in time.

But it is not hard to see that all systems are getting more and more complicated. Proving functional safety requires more and more effort, and leaves more and more gaps. We know that systems with very complicated behaviour exist: fractal systems or cellular automata are well-known examples. Often, the only way to predict their behaviour is to run the system from the beginning (the boundary conditions) up to the point of interest. And we know that complicated, interwoven structures like the technical systems emerging today can behave unpredictably or "chaotically". They look like indeterministic systems (even if there is no indeterminism in a strict mathematical sense).
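
As a small illustration (a Python sketch; Rule 110 is just one classic example of such an automaton): the only general way to know the state of the automaton at step n is to simulate all n steps from the initial conditions.

    # Rule 110, a one-dimensional cellular automaton with provably complex
    # behaviour. In general there is no shortcut to its state at step n:
    # you have to run all n steps from the initial (boundary) conditions.
    RULE = 110

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] << 2 |
                          cells[i] << 1 |
                          cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 63 + [1]            # initial condition: one live cell
    for _ in range(30):               # "predicting" step 30 = simulating 30 steps
        print("".join(".#"[c] for c in cells))
        cells = step(cells)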

And at this point, we are lost with our approach to functional safety. So what can we do? Give up the objective of proving every function or behaviour correct! Huh, that would mean allowing errors, even unanticipated ones. Yes! The future safe fault-tolerant system (FSFTS) has to be designed so that it can operate with errors, even unpredictable ones. Or in other words: the engineering methodology and practice have to be aligned to the objective that a system is safe not because we know every static and dynamic detail of it, but because errors do not harm the system (and if you see security issues as errors, it could become really interesting...).
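
One classic, if modest, building block in this direction is redundancy with voting. A minimal sketch (plain Python; faulty_unit and its fault rate are invented for illustration) of how a system can stay correct although errors occur inside it:

    import random

    # Triple modular redundancy: three independent units compute the same
    # function, and a majority vote masks a fault in any single unit.
    def faulty_unit(x, fault_rate=0.05):
        result = x * x                       # the intended function
        if random.random() < fault_rate:     # an unpredictable internal error
            result += random.randint(1, 100)
        return result

    def voted(x):
        a, b, c = faulty_unit(x), faulty_unit(x), faulty_unit(x)
        if a == b or a == c:                 # at least two units agree
            return a
        if b == c:
            return b
        raise RuntimeError("no majority: more than one unit failed")

    print(voted(7))   # 49 with high probability, despite per-unit faults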

Now, can we find conditions or characteristics of such systems, to have a starting point? In fact, I have thought about this for a long time, but I have no resilient conclusions yet. Many experiments would have to be done. Anyway, here are some basic assumptions I can share:

Assume the state trajectory of an FSFTS is not known in detail for any point in time, but it is limited in state space and constrained by some attractors (a well-known example: the Lorenz attractor). In effect, a single line in state space becomes a band of lines. The state trajectory is looped, but not necessarily closed in itself. Then an error might move the system trajectory, but only within this band, and the attractors ensure that the system is kept within its bounds.
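
A sketch of this intuition (Python, simple Euler integration, standard Lorenz parameters): a small perturbation - the "error" - destroys point-by-point predictability, yet both trajectories stay inside the same bounded band of state space.

    # Lorenz system: two trajectories, one slightly perturbed, diverge in
    # detail but both remain inside the bounded region of the attractor.
    def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)      # the "error": a tiny perturbation
    for _ in range(20000):
        a, b = lorenz_step(a), lorenz_step(b)

    print("a =", a)               # a and b are far apart by now...
    print("b =", b)               # ...but both still lie within the
                                  # attractor's band (roughly |x| < 25,
                                  # |y| < 30, 0 < z < 55)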

In the next step, assume this system state trajectory is partitioned into the trajectories of several subsystems. If some attractors influenced other subsystems, we would get connected systems whose summed behaviour emerges as the functionality. In other words: I would expect that in the future, system design will mean engineering attractors, system state boundaries and trajectory bands. Engineering would no longer mean constructing the system trajectory point by point in time, by cause and effect, but engineering higher dynamics and (in their behaviour) complex systems. I doubt that the available mathematical tools will help us here; I think we need some new basic insights in the area of complex chaotic systems. In this context it is also interesting to follow the progress and methods of systems biology.

Well, we are far away from this, and many questions are left. But I personally believe this will be the future of engineering.

10 April 2008

Secure Software - Part I

From time to time I ask myself what secure software really is. Because if I knew what it is, I would be able to find methods and tools to design and build such software. How should it look, which properties should it have, how can I detect it?

The trivial answer coming to mind immediately is:

Secure software is one which never crashes.

That's pretty simple, isn't it? But what is a "crash", and what does "never" mean? A "crash" is often associated with the well-known Blue Screen. Or with the segmentation fault. The program is gone after that; it has disappeared. That's the same as when I click "Exit" in a menu, which is nothing special. Well, it is not exactly the same: the former is not intended, whereas the latter is. So, is a "crash" the situation where the software stops doing what I want? That description would match another observation of a "crash": the program does not respond - the famous endless loop. In this case the software still exists, and it even does something, but either it does not do the thing I want, or it does it far too much.
 
First conclusion: secure software is software that always does what I want, as many times as I want.

The bad thing is that I do not always know what I should want. In many situations the software must tell me about my possibilities, so that I can think about what I want. Some software is very smart: it believes it knows what I want and does it in advance, to spare time and stupid questions (from the stupid user). Now - how can software never crash by doing exactly what I want all the time, if I don't know what I want, or if I can't tell that poor little thing what I want? Maybe I don't know what I should want to achieve my goal of creating something great; maybe I don't know the full consequences of wanting this or that?

Second conclusion: secure software is software that always does what I want, as many times as I want, and which never leaves any doubt about what I should want to achieve my goal.

That sounds great! But to be honest, crashed or frozen software is not the really bad thing. The real, evil catastrophe is that the data get killed! That's the true reason why I would like to reach for a sledgehammer when faced with such a situation... That hurts. A Blue Screen or segmentation fault often results in bad, corrupted or even lost data. And frozen software is very good at preventing me from saving the work of many hours. From this point of view, secure software is software which never damages my data in the sense that they lose their value or their integrity, or that they can no longer be processed. Either the software can do what I want for all my data, or it never touches them at all.
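
This "all my data or none of them" property is what database people call atomicity. A minimal sketch of the classic write-to-a-temporary-file-then-rename pattern (Python; the file name is made up): even a crash in the middle of writing never leaves half-written data behind.

    import os
    import tempfile

    # All-or-nothing file update: write to a temporary file, force it to
    # disk, then atomically rename it over the original. Readers see either
    # the complete old data or the complete new data, never a mixture.
    def atomic_write(path, data: bytes):
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # force the bytes onto the disk
            os.replace(tmp, path)      # atomic on POSIX and on Windows
        except BaseException:
            os.unlink(tmp)             # on failure, the original is untouched
            raise

    atomic_write("work.txt", b"many hours of work")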

Final conclusion: secure software is software that always does what I want, as many times as I want, which never leaves any doubt about what I should want, and which does what I want for exactly all my data, completely, or never touches them at all.

Isn't that a great result? But it's not all. More to come later.