04 March 2012

Black and White and the Grey between

For some time now, I have been thinking about Enriched Games (or Serious Games) in the browser. I like the idea of using SVG and JavaScript for this, or Processing. Bringing SVG to life requires a lot of JavaScript code running in the browser, and the same is true for Processing, which in its JavaScript implementation is exactly that: a more or less big library, also running in the browser. Looking at this, a big question rises on the horizon: which functions have to run in the browser, and what should be the responsibility of the server?
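
To give an impression of the kind of glue code this means, here is a minimal sketch in TypeScript (the element id "game" and all values are made up for illustration) that animates a single SVG circle in the browser:

    // Minimal sketch: animate one SVG circle with requestAnimationFrame.
    // Assumes the page contains something like <svg id="game" width="300" height="100">.
    const svg = document.querySelector("#game") as SVGSVGElement;

    const ball = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    ball.setAttribute("r", "10");
    ball.setAttribute("fill", "black");
    svg.appendChild(ball);

    let x = 0;
    function tick(): void {
      x = (x + 2) % 300;                  // simple horizontal movement
      ball.setAttribute("cx", String(x));
      ball.setAttribute("cy", "50");
      requestAnimationFrame(tick);        // the browser drives the game loop
    }
    requestAnimationFrame(tick);

Even this toy example shows the pattern: the browser carries the whole game loop, and the server is not involved at all so far.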

That this question is not as trivial as it might seem at first glance can be illustrated by the following technologies:

Take OnLive (TM), for example: the principle is to play games on low-powered devices in the browser by rendering the images on the server and sending them as a video stream to the client. The client only captures the interaction of the player and sends the interaction events (and only those) to the server, which renders the next image, and so on. Advantage: the computing power of the server is potentially unlimited. Effects become possible far beyond anything a single PC could do. Let us call this "Black".
On the other side, complex applications like Google Docs (TM) and Salesforce (TM) are complex client-side applications. There are a lot more, using HTML5, giving the impression of a desktop computer inside a single browser. Let us call this "White".

And then, remember the famous approach of OpenCroquet, a Smalltalk-based technology: the borders between client and server are blurred, because an OpenCroquet system is a set of more or less equal nodes which exchange not data, but computation. There is no central computation instance; every node gets the same result by applying the distributed computation. Complicated mechanisms ensure that the results are the same for every node. Advantage: the data is distributed, and the computation does not need that much transfer bandwidth. This is a true "Grey".

And now, what? Black, White, or Grey?

The problem with Black is its demands on bandwidth and latency. The problem with White is the different levels of code: the server delivers code which then runs in the client. This creates a really complicated meta structure.

As a hypothesis, my answer is to serve objects. Real objects, in the sense that they are independent (except for their communication with other objects) and complete entities. This would break up the meta structure of the White approach while keeping the good parts of "Grey". In addition, letting the objects be distributed opens up space for techniques from the Black approach, if you imagine that a big part of an object's computation may be done on the server.
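
To make this hypothesis a bit more tangible, here is a rough sketch (the class name GameObject and the /compute endpoint are purely hypothetical) of what such a served object could look like in TypeScript: the object is a complete entity, and whether its heavy computation runs in the browser or on the server is a detail hidden inside it.

    // Sketch only: a "served object" that is complete in itself and decides
    // where its computation runs. The /compute endpoint is made up.
    interface ObjectState { position: number; energy: number; }

    class GameObject {
      constructor(private state: ObjectState, private serverUrl?: string) {}

      // The object's behavior: advance by one simulation step.
      async step(): Promise<ObjectState> {
        if (this.serverUrl) {
          // "Black"-like: delegate the heavy part to the server.
          const res = await fetch(this.serverUrl + "/compute", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(this.state),
          });
          this.state = await res.json();
        } else {
          // "White"-like: compute locally in the browser.
          this.state = { position: this.state.position + 1, energy: this.state.energy - 0.1 };
        }
        return this.state;
      }
    }

The caller never sees the difference; the object itself is the unit that is served.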

At the moment, these are just ideas, but in the context of my "Coffee Game" they will become more concrete and real, I hope :) But I don't want to think only in terms of bandwidth or language structure; I am searching for the right way to prepare for the time when we think only in functionality, not in clients, servers and browsers. The age of Cloud and Web is still in its dawn; there is plenty of room for better things :)



06 November 2011

Random Sequences of Code

"Random Sequences of Code" - so was it told in the movie "I, Robot" with Will Smith. It occured as part of an explanation why Robots may show indeterministic or even human like behaviour some day. I don't want to go deeper into the argument chain presented in the movie. Instead, I want to stay close to the words itself. And I will argue while probabilistic driven systems are the best chance for systems for the future.

The first part is "Random". From a very distant view, randomness means out of control. Many philosophers have discussed the nature of randomness. Some say that randomness is only a matter of perception and knowledge: we perceive a process as random if we are not able to explain by which laws it works. Or in other words: for a given state of knowledge, there is a level of complexity above which every process looks like a random one. In Quantum Theory, randomness gains new value and a new role in the until then deterministic view of the universe.

In some domains of technology, there is the term "emergent behavior". Scientists and engineers use this word when they work with systems built from small, independent but connected subsystems. It is a big fear for any safety engineer, because it is an open question how to assess an emergent system (as I will call it from now on) with respect to safety (see this blog post).

But the other part of the quotation above is no less interesting: "Sequences of code". In the early days of software, programs were written in a few hundred lines of code to fit into the small processors. Today, we count many programs in millions of lines of code. But look at the developers: their view of the software is still just a few hundred lines of code; the overall picture is lost. Well, no offense, who could keep one million lines of code in their head? The whole software falls apart into sequences of code.

Although this is more a problem of perception, an emergent system can be understood as a system running through sequences of code: every unit is small and has some sequence of code. If the emergent system produces a function, many nodes activate (in parallel or in sequence) their small code blocks. Try to write down, as one big program, all the code the system runs through in every unit...

My point is this: regardless of whether you have to manage one big block of code or an emergent system, it is not feasible anymore to demand to know *exactly* in which state (which line of code, which values of the variables) the system is at any time. In some sense, we share a little bit the situation of the physicists at the dawn of Quantum Theory: the comfortable feeling of being able to predict *exactly* any system for any point in time is lost or weakened.

Humans have introduced an interesting weapon against this uncertainty: probability theory. The single unit is not important any more, only the expectation about the result of the operation of a collection of units is. Details are not important anymore, only the emergent behavior, the effect in sum. Sounds like the hope for the technical future, no?
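
A toy illustration of this shift in perspective (just a sketch in TypeScript; the success probability of 0.3 is arbitrary): every single unit below is unpredictable, but the behavior of a large collection of them converges to a stable expectation.

    // Each "unit" succeeds with probability 0.3; no single unit is predictable,
    // but the average over many units is: it converges towards 0.3.
    function unit(): number {
      return Math.random() < 0.3 ? 1 : 0;
    }

    function collectiveRate(n: number): number {
      let sum = 0;
      for (let i = 0; i < n; i++) sum += unit();
      return sum / n;
    }

    console.log(collectiveRate(10));      // varies wildly from run to run
    console.log(collectiveRate(1000000)); // reliably close to 0.3

The detail (which unit fired when) is gone; what remains controllable is the expectation over the collection.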

Well, to me it is an open question whether probability theory can be a tool to gain control. It also depends on the answer to the question whether real randomness exists as a constituent building block of the universe.

Assume the Onion Model of Control is true and randomness exists: in this case, there have to be laws (regardless of whether we have found them already or not) that make probability theory deterministic on its own level ("shell"). There would be laws which are the probabilistic counterparts of basic physical laws like energy conservation, Newton's laws and so on. They would be the control tool for the universal building block "randomness". The bridge to finding these laws could be Statistical Thermodynamics and Quantum Theory. Once found, such laws would in effect allow us to control and predict any system behavior, even emergent system behavior.

If randomness doesn't exist, which means everything would be deterministic, probability theory would not be a way of control built into the universe, but just a human instrument to handle things which are unknown: instead of looking at and controlling every onion shell of control, probability theory would allow us to ignore a subset of shells. Probability would then describe, in sum, the behavior of some of the onion shells of control below. Probability theory would be nothing more than a pragmatic approach to the Onion Model of Control, with the danger of coarseness.

Let's look at the other case: if the Calculating Universe Model were true, then the question is whether probability theory is of any use at all. If everything is given by the conditions at the start and the rules in the system, then randomness cannot exist and is just a matter of perception. But as in the case above, probability theory may again give us the chance to handle emergent or complex systems at least in a pragmatic way, until we humans gain insight into the background.

In sum, building technical things based on probability theory may give us the biggest chance to control what we build, or at least to usefully estimate its behavior. How precise and powerful this tool set will be depends on which of the assumptions (Control Model, true randomness or not) is reality. And of course we have every option to develop our insight into probability further, so we should do it :)

At this point, I should remind the reader of the fact (which is very obvious in this post) that my blog is intended more to raise questions than to write philosophically sound conclusions. That would be the matter of a book, which I think more and more I should write some day :)










01 November 2011

Get me Control - no ?

As you may know, I am a safety engineer. Therefore, my task is to assess software and judge whether its use induces any unacceptable risks.

Safety engineers base this assessment on two pillars. The first is to demonstrate that the software is written by an engineering process which is suited to reduce human errors; in other words, the sources of systematic errors should be reduced. The other pillar is the demonstration that the system is correct, by making evident the correct implementation of the requirements, the appropriateness of the tests, and the handling of possible and foreseeable faults and failures. Roughly said, this pillar is about demonstrating that the real, concrete system in our hands is observably safe.

As you might guess, this task is much easier when the software engineering is well developed and when it is possible in principle to know the whole functional construction and to control it. In fact, safety is about control. Now it may be clear why I'm interested in software engineering and the question of control. If the Onion Model of Control is right, we safety engineers and system users have no problem at all.

But - what if the Calculating Universe Model is true? Look at systems biology, the construction of biological artificial life, the construction of DNA-cell machines showing behavior that cannot be calculated. What if the Grown Universe Hypothesis is true?

Well, I am convinced that the way we work with functional and product safety today is not feasible for the technical or technobiological systems ahead of us. I argue that we have to drop the idea of control everywhere and at every time. A system cannot be safe only because we know every branch of its behavior and therefore know that nothing dangerous can occur. Powerful and future-proof safety is when a system can be full of failures and faults, but handles them by itself. See the correction and fault tolerance mechanisms in biology: cells, bodies, populations! But this needs new insight into a different class of construction principles, which is not available to humankind yet.

Without having a solution at hand, my opinion is that we need another approach to engineering, to build systems that do what we need to solve problems. Computer science, understood as the science of information, may take us one step further and let us build safe complex computer systems as well as designed life forms.

21 October 2011

Chicken and Egg

In all this dry and serious technical world, the call for human aspects gains more and more attention. If it is really the case that human beings have to create all these products born out of mathematics, logic and rationality, then for sure we have to look at the poor human beings in this process. How will they feel, will they like what they do, will they be happy?

Let's switch to a chicken farm. The fate of the chicken is to produce eggs, a thing which seems natural for it. Chickens lay eggs. So it may be that the chicken is happy about every egg, because it is the purpose of its life and it sees this purpose. And once the egg is there, the chicken cares for it until this tiny little white thing turns into a young yellow chick. Having reached that, the chicken feels good, yes?

Now, in the nasty modern chicken farms, where the chickens are doomed to sit very close together, nearly unable to move or to see anything other than steel and other chickens, it is hard to imagine that a chicken will be happy. It cannot see its egg, because the moment it is released from the body the egg is taken away. The chicken has no contact with its purpose in life, and the conditions for producing high-class eggs are missing: varied food, fresh air, sunlight, movement and normal interaction with all the other chickens.

The creative mind may have the idea of producing the best eggs ever by means of the luckiest chickens ever. What lucky chickens: they can walk in an English garden, the earthworms are hand-selected and at least 5 inches long, the temperature is very pleasant, like a warm sunny spring day around tea time. Also, the chicken does not have to lay any egg if it chooses not to. Given this lucky fate, any egg we get must be the best egg ever, no?

What I want to say is this: the key word is balance. It is all about the product, but also about the humans and the processes (including technology) used to build that product. But no factor is more important than the others. Sometimes I have the impression that in some discussions the human factor is overstressed. All factors are interwoven with the processes and laws which have to be considered. Software development should neither be set up like a chicken factory, nor like a paradise for the self-fulfillment of software developers. The balance has to be kept.





16 October 2011

Hello, Siri

Apple's Siri is, from my point of view, the most underestimated thing about the iPhone 4S. In fact, it could be the start of another change touching everything. Although well-working speech recognition still holds some fascination, we have got used to it over the last years. Automatic call centers and the speech commands in my car are well-known examples.

There are two other properties of Siri which are special: using knowledge and using context. You no longer have to speak isolated commands like "Radio" "on". You can talk as you would with a person - a beginner or a child, but a person. And secondly, this "person" has knowledge and context awareness. By using data from location services, databases, documents and so on, it is possible to provide an interpretation of what the user said. It is a first step into a much bigger world, artificial understanding. If speech recognition is our planet, then artificial understanding is like offering us the Milky Way.

For me it doesn't matter whether Siri works exactly as described above or whether there are many problems with it. It is the start of a new way of working with artificial intelligence. It is a hypothesis of mine that the universe works probabilistically. I believe that the physics we use today - with solutions of algebraic equations - is just the recognition of an average or of limits. In Quantum Theory and statistical thermodynamics the probabilistic nature is directly observable. And my strong belief is that the brain - as far as it relates to knowledge processing and control - is a statistical engine, with association and pattern recognition as one of its building blocks.

Collecting and correlating pieces of knowledge - which we can call documents - and building knowledge from them is the thing which really impresses me about the IBM Watson project. And for this judgement it is irrelevant that Watson is built on brute force and a collection of hand-made optimizations for special problems. We will come closer to a theory describing all this with time.

The important thing is this: Siri (I think) and Watson are knowledge processors in the sense above, and they move our view towards the probabilistic nature of the universe, also shifting our paradigm of what computers can be good for, maybe even the paradigm of how to use this huge knowledge base and database called the Internet.

This is how all things begin: small and decent ;)

12 October 2011

Tell me a story

Today I've been reading architectural documents in preparation for a functional safety analysis. Such an analysis is performed to come to an assessment of whether the given system and its architectural elements fulfill the safety-related requirements. As you might imagine, for doing this work it is helpful to understand the effect chains and the overall operation of the architecture.

So, sitting there and trying to collect the facts spilling out of the pages into my brain, hoping to puzzle together a picture (the picture of the software), one thought flashed through my mind: can architecture documents tell stories - and should they even? (For those who read my last post: let's assume the Constructible Hypothesis is true.)

Technical systems are built by human beings (well, at least at the moment). Because humans evolve day by day with growing experience, causing a shifting look at things and changing attitudes, these technical systems have a history. Humans learn about problems, learn about solutions and, in consequence, learn to evolve methods and tools.

But not only the creator has a history; the environment - the source of the problems - is also non-static. Customers have new wishes, technical or environmental constraints change. All this together makes it undeniable that a technical product has a history. For example, many things on modern ships have their roots in the old wooden sailing ships. The instrumentation of airplanes is frozen history of aviation construction.

There is a movie quotation (I think it was from "Big Trouble in Little China" with Kurt Russell): "That's like all things begin: small and decent". Yes! This is the power of knowing history: the first versions of things are often simple, not elegant or efficient, but small enough not to obscure the basic idea behind a solution. Or better: in the beginning of technical things, the intention of the architectural elements is still visible. Going back in history means going to the core of a technical mechanism, the will of the creator. Going back means getting insight into the basic idea of things.

But there is more. We said technical things evolve. But they do not evolve without reason. If technical properties or functions are added, that should happen because of changed requirements! So history should tell you a story about sound decisions: why this function or this effect was added to the system, and why the old system was not good enough. Again, evolutionary steps have an intention behind them (hopefully!). This is far from trivial; many team members don't know why software module x or parameter y was added. Look how many things are introduced and killed later on because the knowledge of their introduction got lost.

Now, knowing the history of a technical product seems good. But the question now is: can an architectural documentation tell this history?

Please don't think of the huge version trees of tools like CVS or Subversion. It is not practical to write down a full history log, of course not. But if you see a technical product as a chain of design decisions, controlled by intentions, it may become clearer how to document it:

show the basic problem and the basic idea, and show the most important intentions from the historic sequence of intentions that shaped the architecture as it is today.

Showing intentions is not necessarily complicated: well-chosen names and wording can suggest what the basic idea or problem was. Also, arranging the document along a level-of-detail gradient can have the effect of presenting the reader a trip through time. AND: show the associated problems!

But my statement is this: just nailing down an architecture as it stands in front of you is often not enough. Such documentation may be useful for rebuilding technical things (schematics, for example), but not for understanding them. Or, to say it much more simply: show the human factor in the architecture.

I know, this is no precise plan for how to write documents. That may be part of a later post :)


09 September 2011

Agile, Architecture and other things

There are questions about software development I'm really concerned about. These questions arose from my experience as a software engineer, software architect, requirements engineer and team leader.

One expression of these questions is the following problem:
  • How can software engineering become exactly this: engineering?
This question is not trivial. But first we have to ask: does an answer exist at all? If there is an answer, it should be possible to compare software engineering with other domains of engineering, say mechanical engineering or architecture. I often try to map properties of mechanical engineering onto properties of software engineering. But the special mode of existence of software - not being bound to physical resources and having multiple instances without additional matter or energy - may have an impact on the metaphysics of software.

Now, one of these special properties of software is its potentially unlimited effect. Where mechanical devices are limited by matter and energy, software is completely free, not bound, only limited by its runtime, the computer. And we have learned how fast computers extend their capabilities. In other words, the effects caused by running software can be very, very complex.

This observation raises the following questions:

  • Is it possible to construct technical systems of any complexity level? (The Constructible Hypothesis) Or - as the contrary statement:
  • Can technical systems in general only be grown by applying evolutionary principles? (The Grown Universe Hypothesis)

If the first question has a positive answer, then the question above (the existence of engineering for software) has one too. Both questions also touch another aspect of things built by humans: the aspect of control. To construct things means that the creator has control over every static and dynamic element in the cause chain. The cause chains are arranged by logical reasoning, using basic laws. This would be the place for an engineering technique for software. But exactly here the area of fog begins, because we know that not everything can be calculated, which means that full control of a system of arbitrary complexity may not be possible. Whether calculability is equal to controllability is an open question.

But if the Constructible Hypothesis were not true, the only other way of building systems would be to let them grow and arrange solutions by themselves. The evolutionary principle means providing just the environmental conditions (or requirements) and letting the system grow. This need not happen through the system itself, as with Genetic Algorithms; enhancing a piece of software revision by revision through human teams is also a kind of growth and evolution. By the way, this would be a place where I see agile and lean methods as useful.

If we assume that the Constructible Hypothesis is right, one may ask how we could handle this. One way may be the Onion Model of Control: a few basic laws provide the foundation to build up simple entities and their properties (like atoms from the four basic forces and quantum theory, for example). These entities form their own rules on their level, from which bigger entities are built with their own rules (molecules, like proteins), and so on (cells, bodies...). Every level has its own rules, based on the preceding level and narrowing the space of possible behaviors. DSLs operate on the same idea.
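
As a small illustration of this "narrowing of the space of possible behaviors" (just a sketch with made-up names): a lower level offers a general drawing primitive, and a tiny DSL on top of it only permits a restricted, domain-specific set of operations with its own rules.

    // Lower level: a general primitive - any line can be drawn.
    function drawLine(x1: number, y1: number, x2: number, y2: number): void {
      console.log(`line ${x1},${y1} -> ${x2},${y2}`);
    }

    // Higher level: a tiny "floor plan" DSL built on the primitive.
    // Its own rules allow only axis-aligned walls of positive length,
    // so the space of possible behaviors is narrowed on this level.
    const floorPlan = {
      wallRight(x: number, y: number, length: number): void {
        if (length <= 0) throw new Error("walls must have positive length");
        drawLine(x, y, x + length, y);
      },
      wallUp(x: number, y: number, length: number): void {
        if (length <= 0) throw new Error("walls must have positive length");
        drawLine(x, y, x, y + length);
      },
    };

    floorPlan.wallRight(0, 0, 10);
    floorPlan.wallUp(10, 0, 5);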

Now look at a cellular automaton. There are also very basic rules, which generate a special behavior of the automaton over time. In the Game of Life, for example, patterns emerge which themselves show behavior. The "glider" is such a pattern, moving across the 2D grid of the cellular automaton. But there is one big difference to the Onion Model of Control: the glider does not have its own rules; its behavior is fully and only determined by the basic cellular automaton rules! That means that controlling the glider and higher-level patterns is only possible through the basic cellular automaton rules and the initial conditions. In the Onion Model of Control, you would have constructive possibilities on every level (you could construct different proteins to create a special effect on that level, for example).
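
To underline this point with code: in the following compact sketch of the standard Game of Life rules (in TypeScript, on a wrapped grid), the glider is written down nowhere; everything that happens on the grid, gliders included, follows from this single rule applied uniformly to every cell.

    // One step of Conway's Game of Life on a toroidal (wrapped) grid.
    type Grid = boolean[][];

    function step(grid: Grid): Grid {
      const rows = grid.length;
      const cols = grid[0].length;
      const next: Grid = grid.map(row => row.slice());
      for (let r = 0; r < rows; r++) {
        for (let c = 0; c < cols; c++) {
          let neighbours = 0;
          for (let dr = -1; dr <= 1; dr++) {
            for (let dc = -1; dc <= 1; dc++) {
              if (dr === 0 && dc === 0) continue;
              if (grid[(r + dr + rows) % rows][(c + dc + cols) % cols]) neighbours++;
            }
          }
          // The whole rule set: birth with exactly 3 neighbours, survival with 2 or 3.
          next[r][c] = grid[r][c] ? neighbours === 2 || neighbours === 3 : neighbours === 3;
        }
      }
      return next;
    }

There is no place in this code where one could "construct" a glider-level rule; the only levers are the rule above and the initial grid.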

From this it is clear that if the cellular automaton model (or, to name it after John von Neumann, the Calculating Universe Model) is true, only the Grown Universe Hypothesis can be true. Otherwise, if the Onion Model is correct, then both the Constructible Hypothesis and the Grown Universe Hypothesis have a chance to be true.

So these questions arise:

  • For the control behavior of human-built systems, is the Onion Model of Control true? Or
  • For the control behavior of human-built systems, is the Calculating Universe Model true?

For me, these are important questions, even if they seem not that pragmatic :) But it is important to investigate what we can really do and achieve in principle in the domain of software engineering. Of course this is no exact analysis, but it should illustrate what I'm thinking about.

If someone has a hint about already available work or texts from philosophers, computer scientists and so on, I would be glad about a pointer :)