06 November 2011

Random Sequences of Code

"Random Sequences of Code" - so was it told in the movie "I, Robot" with Will Smith. It occured as part of an explanation why Robots may show indeterministic or even human like behaviour some day. I don't want to go deeper into the argument chain presented in the movie. Instead, I want to stay close to the words itself. And I will argue while probabilistic driven systems are the best chance for systems for the future.

The first one is "Random". In a very distant view, randomness means out of control. Many philosophers discussed the nature of randomness. Some say, that randomness is only a matter of perception and knowledge. We perceive a process as a random one if we are not able to explain by which laws the process works. Or in other words: for a given knowledge, there is a level of complexity where every process complexer than this level looks like a random process. In Quantum Theory, randomness gains new value and a new role in the up to this time deterministic view of the universe.

In some domains of technology, there is the term "emergent behavior". Scientists and engineers use this term when they work with systems built from small, independent but connected subsystems. It is a big fear for any safety engineer, because it is an open question how to assess an emergent system (as I will call it from now on) with respect to safety (see this blog post).
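A classic toy example of emergent behavior is a one-dimensional cellular automaton. The sketch below (a standard textbook example, not tied to any particular system) uses Rule 110: each cell follows a trivial local rule, yet the global pattern is famously hard to predict from the rule alone.

```python
# Rule 110, a one-dimensional cellular automaton: each cell follows a
# trivial local rule, yet the global pattern shows complex, "emergent"
# structures that are hard to foresee from the rule itself.

RULE = 110

def step(cells):
    """Apply the local rule to every cell (with wrap-around neighbours)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40          # a single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```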

But the other part of the quotation above is no less interesting: "sequences of code". In the early days of software, programs were written in a few hundred lines of code to fit into the small processors. Today, we count many programs in millions of lines of code. But look at the developers: their view of the software is still just a few hundred lines of code; the overall picture is lost. Well, no offense - who could keep one million lines of code in their head? The whole software falls apart into sequences of code.

Although this is more a problem of perception, an emergent system could be understood as a system running through sequences of code: every unit is small and contains some sequence of code. If the emergent system performs a function, many nodes activate (in parallel or in sequence) their small code blocks. Try to write down all the code the system runs through in every unit as one big program....

My point is this: independent of whether you have to manage one big block of code or an emergent system, it is no longer feasible to demand to know *exactly* which state (which line in the code, which values of the variables) the system is in at any time. In some sense, we share a little of the situation of the physicists at the dawn of Quantum Theory: the comfortable feeling of being able to predict *exactly* any system at any point in time is lost or weakened.

Humans have introduced an interesting weapon against this uncertainty: probability theory. The single unit is not important any more; only the expectation about the result of the operation of a collection of units is. Details are not important anymore, only the emergent behavior, the effect in sum. Seems to be the hope for the technical future, no?
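The law of large numbers captures exactly this shift from the single unit to the collection. A minimal sketch of the general principle (again my own illustration): a single coin flip is unpredictable, but the mean over many flips converges to the expectation.

```python
import random

# A single coin flip is unpredictable, but the mean of many flips
# converges to the expectation - the law of large numbers in action.
random.seed(1)
for n in (10, 1_000, 100_000):
    mean = sum(random.random() < 0.5 for _ in range(n)) / n
    print(f"{n:>7} flips -> observed mean {mean:.4f} (expected 0.5000)")
```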

Well, to me, it is an open question whether probability theory can be a tool to gain control. It also depends on the answer to the question whether real randomness exists as a constituent brick of the universe.

Assume the Onion Model of Control is true, and randomness exists: in this case, there have to be laws (independent of whether we have found them already or not) that make probability theory deterministic on its level ("shell"). There would be laws which are the probabilistic counterparts of basic physical laws like energy conservation, Newton's laws, etc. They would be the control tool for the universal building block "randomness". The bridge to finding these laws could be Statistical Thermodynamics and Quantum Theory. Once found, such laws would in effect allow us to control and predict any system behavior, even emergent system behavior.

If randomness doesn't exist, which means everything is deterministic, probability theory would not be a way of control built into the universe, but just a human instrument to handle things which are unknown: instead of observing and controlling every onion shell of control, probability theory would allow us to ignore a subset of shells. Probability would then describe, in sum, the behavior of some onion shells of control below. Probability theory would be nothing more than a pragmatic approach to the Onion Model of Control, with the danger of roughness.

Let's look at the other case: if the Calculating Universe Model is true, then the question is whether probability theory is of any use at all. If everything is given by the conditions at the start and the rules in the system, then randomness cannot exist and is just a matter of perception. But as in the case above, probability theory may again give us the chance to handle emergent or complex systems at least in a pragmatic way, until we humans gain insight into the background.

In sum, building technical things based on probability theory may give us the biggest chance to control what we build, or at least to usefully estimate its behavior. How precise and powerful this tool set will be depends on which of the assumptions (control model, true randomness or not) matches reality. And of course we have every option to develop our insight into probability further, so we should do it :)

At this point, I should remind the reader of the fact (which is very obvious in this post) that my blog is intended more to raise questions than to draw philosophically sound conclusions. That would be the matter of a book - and I think more and more that I should write it some day :)

01 November 2011

Get me Control - no?

As you may know, I am a safety engineer. Therefore, my task is to assess whether the use of a piece of software induces any unacceptable risks.

Safety engineers base this assessment on two poles. The first is to demonstrate that the software was written with an engineering process suited to reduce human errors; in other words, the sources of systematic errors should be reduced. The other pole is to demonstrate that the system is correct, by making evident the correct implementation of requirements, the appropriateness of tests, and the handling of possible and foreseeable faults and failures. Roughly speaking, this pole is about demonstrating that the real, concrete system in our hands is observably safe.

As you might guess, this task is much easier when the software engineering is well developed and when it is possible, in principle, to know the entire functional construction and to control it. In fact, safety is about control. Now it may be clear why I'm interested in software engineering and the question of control. If the Onion Model of Control is right, we safety engineers and system users have no problem at all.

But - what if the Calculating Universe Model is true? Look at systems biology, the construction of biological artificial life, the construction of DNA-cell machines showing emergent, non-calculable behavior. What if the Grown Universe Hypothesis is true?

Well, I am convinced that the way we work with functional and product safety today is not feasible for the technical or technobiological systems ahead of us. I argue that we have to drop the idea of everywhere-and-every-time control. A system cannot be safe only because we know every branch of its behavior and therefore know that nothing dangerous can occur. Powerful and future-proof safety is when a system can be full of failures and faults, but handles them by itself. See the correction and fault tolerance mechanisms in biology: cells, bodies, populations! But this needs new insights into a different class of construction principles, which is not available to humankind yet.
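A small, classical ancestor of such self-handling already exists in engineering: triple modular redundancy. The sketch below (my own toy illustration, with a hypothetical fault rate) masks a single faulty unit by majority vote, without knowing in advance which unit fails - a very modest first step towards systems that tolerate their own faults.

```python
import random
from collections import Counter

# Triple modular redundancy (TMR): three independent units compute the same
# function, and a majority vote masks a single faulty unit - without anyone
# knowing in advance which unit fails. (Two simultaneous faults still win
# the vote, so TMR reduces, but does not eliminate, wrong results.)

def faulty_unit(x, fault_rate=0.1):
    """A unit that sometimes returns a wrong answer (hypothetical fault rate)."""
    correct = x * x
    return correct + 1 if random.random() < fault_rate else correct

def tmr(x):
    """Run three redundant units and return the majority result."""
    votes = Counter(faulty_unit(x) for _ in range(3))
    return votes.most_common(1)[0][0]

random.seed(7)
results = [tmr(5) for _ in range(1000)]
print("wrong results with TMR:", sum(r != 25 for r in results), "out of 1000")
```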

Without having a solution at hand, my opinion is that we need another approach to engineering, to build systems that do what we need to solve problems. Computer science, understood as the science of information, may take us one step further and let us build safe complex computer systems as well as designed life forms.