
04 March 2012

Black and White and the Grey between

For some time now, I have been thinking about Enriched Games (or Serious Games) in the browser. I like the idea of using SVG and JavaScript for this, or Processing. Bringing SVG to life needs a lot of JavaScript code running in the browser, and the same is true for Processing, whose JavaScript implementation is exactly that: a more or less big library also running in the browser. And looking at this, a big question rises over the horizon: which functions have to run in the browser, and what should be the responsibility of the server?

That this question is not as trivial as it might seem at first glance can be seen by considering the following technologies:

OnLive (TM), for example: the principle is to play games on low-powered devices in the browser by rendering the images on the server and sending them as a video stream to the client. The client only captures the player's interaction and sends the interaction events (and only those) to the server, which renders the next image, and so on. Advantage: the calculation power of the server is potentially unlimited. Effects become possible that go beyond anything a single PC could do. Let us call this "Black".
On the other side, applications like Google Docs (TM) and Salesforce (TM) are complex client-side applications. There are a lot more, using HTML5, giving the impression of a desktop computer inside a single browser. Let us call this "White".

And then, remember the famous approach of OpenCroquet, a Smalltalk-based technology: the borders between client and server are blurred, because an OpenCroquet system is a set of more or less equal nodes which exchange not data but computation. There is no central computation instance; every node gets the same result by applying the distributed computation. Complicated mechanisms ensure that the results are the same for every node. Advantage: the data are distributed, and the computation does not need much transfer bandwidth. And this is a true "Grey".

And now, what? Black, White, Grey?

The problem with Black is its requirements on bandwidth and latency. The problem with White is the different levels of code: the server code delivers code which runs in the client. This creates a really complicated meta structure.

As a hypothesis, my answer is to serve objects. Real objects, in the sense that they are independent (except for their communication with other objects) and complete entities. This would break the meta structure of the White approach while taking the good parts from "Grey". In addition, letting the objects be distributed opens up space for techniques from the Black approach, if you imagine that a big part of an object's computation may be done on the server.
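
To make this a bit more concrete, here is a minimal sketch in Scala (the names ServedObject and CoffeeCup are invented for illustration; this is not an existing framework): such an object owns its state, reacts only to messages, and describes *what* to show, independent of whether its computation runs on the client or on the server.

```scala
// Sketch of a "served object": a self-contained entity that owns its state,
// talks to the outside world only via messages, and exposes a description of
// what to render. Where the compute-heavy parts run (client or server) becomes
// a deployment decision, not part of the object's contract.
trait ServedObject {
  def id: String
  def receive(event: String): Unit   // interaction events, e.g. from the player
  def render: String                 // what to show, not how to draw it
}

// Hypothetical object for the "Coffee Game" idea.
class CoffeeCup(val id: String) extends ServedObject {
  private var fillLevel = 0.0
  def receive(event: String): Unit = event match {
    case "pour"  => fillLevel = math.min(1.0, fillLevel + 0.1)
    case "drink" => fillLevel = math.max(0.0, fillLevel - 0.2)
    case _       => ()               // ignore unknown events
  }
  def render: String = f"cup $id: ${fillLevel * 100}%.0f%% full"
}

object ServeObjectsDemo extends App {
  val cup = new CoffeeCup("cup-1")
  List("pour", "pour", "pour", "drink").foreach(cup.receive)
  println(cup.render)                // prints: cup cup-1: 10% full
}
```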

At the moment these are just ideas, but in the context of my "Coffee Game" they will become more concrete and real, I hope :) But I don't want to think only in terms of bandwidth or language structure; I am searching for the right way to prepare for the time when we think only in functionality, not in clients, servers and browsers. The age of Cloud and Web is still in its dawn phase; there is plenty of room for better things :)



06 November 2011

Random Sequences of Code

"Random Sequences of Code" - so was it told in the movie "I, Robot" with Will Smith. It occured as part of an explanation why Robots may show indeterministic or even human like behaviour some day. I don't want to go deeper into the argument chain presented in the movie. Instead, I want to stay close to the words itself. And I will argue while probabilistic driven systems are the best chance for systems for the future.

The first word is "Random". Seen from a distance, randomness means being out of control. Many philosophers have discussed the nature of randomness. Some say that randomness is only a matter of perception and knowledge: we perceive a process as random if we are not able to explain by which laws it works. Or in other words: for a given amount of knowledge, there is a level of complexity above which every process looks like a random process. In quantum theory, randomness gains a new value and a new role in the until then deterministic view of the universe.

In some domains of technology there is the term "emergent behavior". Scientists and engineers use this word when they work with systems built from small, independent but connected subsystems. It is a big fear of any safety engineer, because it is an open question how to assess an emergent system (as I will call it from now on) with respect to safety (see this blog post).

But the other part of the quotation above is no less interesting: "sequences of code". In the early days of software, programs were written in a few hundred lines of code to fit into the small processors. Today, we count many programs in millions of lines of code. But look at the developers: their view of the software is still just a few hundred lines of code; the overall picture is lost. Well, no offense, who could keep one million lines of code in their head? The whole software falls apart into sequences of code.

Although this is more a problem of perception, an emergent system could be understood as a system going through sequences of code: every unit is small and has some sequence of code. If the emergent system performs a function, many nodes activate (in parallel or in sequence) their small code blocks. Now try to write down, as one big program, all the code the system runs through in every unit...

My point is this: independent of whether you have to manage one big block of code or an emergent system, it is no longer feasible to demand to know *exactly* at which state (which line in the code, which values of the variables) the system is at any given time. In some sense we share a little bit the situation of the physicists at the dawn of quantum theory: the comfortable feeling of being able to predict *exactly* any system for any point in time is lost or weakened.

Humans have introduced an interesting weapon against this uncertainty: probability theory. The single unit is not important anymore, only the expectation about the result of the operation of a collection of units. Details are not important anymore, only the emergent behavior, the effect in sum. Sounds like the hope for the technical future, no?
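
A toy illustration of this shift from the single unit to the expectation over a collection (a small Scala sketch; the per-unit failure probability is invented): nothing useful can be said about one unit at one moment, but the aggregate behaviour of many units is very predictable.

```scala
import scala.util.Random

// Each "unit" misbehaves with probability p. The state of an individual unit
// is unpredictable, but the fraction of misbehaving units in a large
// collection stays very close to p - the expectation is what we can reason
// about and, perhaps, control.
object ManyUnits extends App {
  val p   = 0.03                    // assumed per-unit failure probability
  val rng = new Random(42)

  def failingFraction(units: Int): Double =
    (1 to units).count(_ => rng.nextDouble() < p).toDouble / units

  for (n <- List(10, 100, 10000, 1000000))
    println(f"$n%8d units -> observed rate ${failingFraction(n)}%.4f (expected $p%.4f)")
}
```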

Well, to me it is an open question whether probability theory can be a tool to gain control. It also depends on the answer to the question of whether real randomness exists as a constituent brick of the universe.

Assume the Onion Model of Control is true and randomness exists: in this case there have to be laws (independent of whether we have found them already or not) that make probability theory deterministic on its own level ("shell"). There would be laws which are the probabilistic counterparts of basic physical laws such as energy conservation, Newton's laws etc. They would be the control tool for the universal building block "randomness". The bridge to finding these laws could be statistical thermodynamics and quantum theory. Once found, such laws would in effect allow us to control and predict any system behavior, even emergent system behavior.

If randomness doesn't exist, which means everything is deterministic, probability theory would not be a built-in way of control of the universe, but just a human instrument for handling things which are unknown: instead of inspecting and controlling every onion shell of control, probability theory would allow us to ignore a subset of shells. Probability would then describe, as a sum, the behaviour of some onion shells of control below. Probability theory would be nothing more than a pragmatic approach to the Onion Model of Control, with the danger of roughness.

Let's look at the other case: if the Calculating Universe Model were true, then the question is whether probability theory is of any use at all. If everything is given by the conditions at the start and the rules in the system, then randomness cannot exist and would just be a matter of perception. But as in the case above, probability theory may again give us the chance to handle emergent or complex systems at least in a pragmatic way, until we humans gain insight into the background.

In sum, building technical things based on probability theory may give us the biggest chance to control what we build, or at least to usefully estimate its behavior. How precise and powerful this tool set will be depends on which of the assumptions (control model, true randomness or not) is reality. And of course we have every option to develop our insight into probability further, so we should do it :)

At this point I should remind the reader of the fact (which is very obvious in this post) that my blog is intended more to raise questions than to write philosophically sound conclusions. That would be the matter of a book, and I think more and more that I should write it some day :)










01 November 2011

Get me Control - no ?

As you may know, I am a safety engineer. Therefore, my task is to assess software and show that its use does not induce any unacceptable risks.

Safety engineers base this assessment on two poles. The first is to demonstrate that the software was written with an engineering process suited to reduce human errors; in other words, the sources of systematic errors should be reduced. The other pole is the demonstration that the system is correct, by making evident the correct implementation of the requirements, the appropriateness of the tests and the handling of possible and foreseeable faults and failures. Roughly said, this pole is about demonstrating that the real, concrete system in our hands is observably safe.

As you might guess, this task is much easier when the software engineering is well developed and when it is possible in principle to know the entire functional construction and to control it. In fact, safety is about control. Now it may be clear why I'm interested in software engineering and the question of control. If the Onion Model of Control is right, we safety engineers and the system users have no problem at all.

But - what if the Calculating Universe Model is true? Look at systems biology, the construction of biological artificial life, the construction of DNA-cell machines from which non-calculable behavior emerges. What if the Grown Universe Hypothesis is true?

Well, I am convinced that the way we work with functional and product safety today is not feasible for the technical or technobiological systems in front of us. I argue that we have to drop the idea of everywhere-and-every-time control. A system cannot be safe only because we know every branch of its behavior and therefore know that nothing dangerous can occur. Powerful and future-proof safety is when a system can be full of failures and faults, but handles them by itself. See the correction and fault tolerance mechanisms in biology: cells, bodies, populations! But this needs new insights into a different class of construction principles, which is not available to humankind yet.

Without having a solution at hand, my opinion is that we need another approach to engineering in order to build systems that do what we need to solve problems. Computer science, understood as the science of information, may take us one step further and let us build safe complex computer systems as well as designed life forms.

16 October 2011

Hello, Siri

Apple's Siri is, from my point of view, the most underestimated thing about the iPhone 4S. In fact, it could be the start of another change touching everything. Although well-working speech recognition still holds some fascination, we have got used to it over the last years. Automatic call centers and the speech commands in my car are well-known examples.

There are two other properties of Siri which are special: using knowledge and using context. You no longer have to speak isolated commands like "Radio" "on". You can talk as you would to a person - a beginner or a child, but a person. And secondly, this "person" has knowledge and context awareness. By using data from location services, databases, documents etc., it is possible to provide an interpretation of what the user said. It is a first step into a much bigger world, artificial understanding. If speech recognition is our planet, then artificial understanding is like offering us the Milky Way.

For me it doesn't matter whether Siri works exactly as described above or whether there are many problems with it. It is the start of a new way to work with artificial intelligence. It is a hypothesis of mine that the universe works probabilistically. I believe that the physics we use today - with solutions of algebraic equations - is just the recognition of an average or of limits. In quantum theory and statistical thermodynamics, the probabilistic nature is directly observable. And my strong belief is that the brain is - as far as it relates to knowledge processing and control - a statistical engine, where one of its bricks is association and pattern recognition.

Collecting and correlating knowledge pieces - which we can call documents - and building knowledge from them is the thing which really impresses me about the IBM Watson project. And for this judgement it is irrelevant that Watson is built on brute force and a collection of hand-made optimizations for special problems. We will come closer to a theory describing all this with time.

The important thing is this: Siri (I think) and Watson are knowledge processors in the sense above, and they move our view towards the probabilistic nature of the universe, also shifting our paradigm of what computers can be good for, maybe even the paradigm of how to use this huge knowledge base and database called the Internet.

This is how all things begin: small and unobtrusive ;)

09 September 2011

Agile, Architecture and other things

There are questions about software development I'm really concerned about. These questions arose from my experience as a software engineer, software architect, requirements engineer and team leader.

One expression of these questions is the following problem:
  • How can software engineering become exactly this, engineering?
This question is not trivial. But first we have to ask: does an answer exist at all? If there is an answer, it should be possible to compare software engineering with other domains of engineering, say mechanical engineering or architecture. I often try to map properties of mechanical engineering onto properties of software engineering. But the special way in which software exists - not bound to physical resources, with multiple instances possible without additional matter or energy - may have an impact on the metaphysics of software.

Now, one of these special properties of software is its potentially unlimited effect. Where mechanical devices are limited by matter and energy, software is completely free, not bound, only limited by its runtime, the computer. And we have learned how fast computers extend their capabilities. In other words, the effects caused by a running piece of software can be very, very complex.

This observation raises the following questions:

  • Is it possible to construct technical systems of any complexity level? (The Constructible Hypothesis) Or, as the counter-statement:
  • In general, can technical systems only be grown by applying evolutionary principles? (The Grown Universe Hypothesis)

If the first question has a positive answer, then the question above (the existence of engineering for software) has one too. Both questions also touch another aspect of things built by humans: the aspect of control. To construct things means that the creator has control over every static and dynamic element in the chain of causes. The cause chains are arranged by logical reasoning, using basic laws. Here would be the place for an engineering technique for software. But exactly here the area of fog begins, because we know that not everything can be calculated, which means that full control of a system of arbitrary complexity may not be possible. Whether calculability is equal to controllability is an open question.

But if the Constructible Hypothesis were not true, the only other way of building systems would be to let them grow and arrange solutions by themselves. The evolutionary principle means providing just the environmental conditions (or requirements) and letting the system grow. This does not necessarily happen by the system itself, as with genetic algorithms: enhancing a piece of software revision by revision by human teams is also some kind of growth and evolution. By the way, this is where I would see agile and lean methods as useful.

If we assume that the Constructible Hypothesis is right, one may ask how we could handle this. One way may be the onion model of control: a few basic laws provide the foundation to build up simple entities and their properties (like atoms from the four basic forces and quantum theory, for example). These entities form their own rules on their level, from which bigger entities are built with their own rules (molecules, like proteins) and so on (cells, bodies...). Every level has its own rules, but based on the preceding level, and narrowing the space of possible behaviors. DSLs operate on the same idea.
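
The layering idea can be sketched with a tiny, made-up example in Scala: a lower level that allows arbitrary line segments, and a small DSL on top whose own "laws" (only axis-aligned rectangles) narrow the space of possible figures while still being expressed in the primitives of the level below.

```scala
// Made-up two-level example of the onion idea.
// Level 0: very general primitives - any line segment is allowed.
final case class Point(x: Double, y: Double)
final case class Segment(a: Point, b: Point)

// Level 1: a tiny DSL built on level 0. Its own rules permit only closed,
// axis-aligned rectangles, narrowing the space of possible behaviors, yet
// everything it produces is still made of level-0 segments.
object RectangleDsl {
  def rectangle(origin: Point, width: Double, height: Double): List[Segment] = {
    val corners = List(
      origin,
      Point(origin.x + width, origin.y),
      Point(origin.x + width, origin.y + height),
      Point(origin.x, origin.y + height))
    corners.zip(corners.tail :+ corners.head).map { case (a, b) => Segment(a, b) }
  }
}

object OnionDemo extends App {
  RectangleDsl.rectangle(Point(0, 0), 4, 2).foreach(println)
}
```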

Now look at a cellular automaton. There are also very basic rules, which generate a particular behavior of the automaton over time. In the Game of Life, for example, patterns come out which themselves show behavior. The "glider" is such a pattern, moving over the 2D grid of the cellular automaton. But there is one big difference to the onion model of control: the glider does not have its own rules; its behavior is fully and only determined by the basic cellular automaton rules! That means that controlling the glider and higher-level patterns is only possible through the basic cellular automaton rules and the initial conditions. In the onion model of control, you would have constructive possibilities on every level (you could construct different proteins to create a special effect on that level, for example).
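
A minimal Game of Life sketch in Scala makes this concrete: the only knobs are the universal rules and the initial cells; the glider that then travels over the grid has no rules of its own that we could adjust.

```scala
// Conway's Game of Life on a small wrapping grid, seeded with a glider.
// The glider shows behavior (it moves diagonally), but the only things we can
// set are the global rules in `step` and the initial configuration.
object GliderDemo extends App {
  val size = 10
  type Grid = Set[(Int, Int)]                    // coordinates of live cells

  val glider: Grid = Set((1, 0), (2, 1), (0, 2), (1, 2), (2, 2))

  def neighbours(c: (Int, Int)): Seq[(Int, Int)] =
    for (dx <- -1 to 1; dy <- -1 to 1; if (dx, dy) != (0, 0))
      yield (((c._1 + dx) % size + size) % size, ((c._2 + dy) % size + size) % size)

  def step(live: Grid): Grid = {
    val candidates = live ++ live.flatMap(neighbours)
    candidates.filter { c =>
      val n = neighbours(c).count(live.contains)
      n == 3 || (n == 2 && live.contains(c))
    }
  }

  def show(live: Grid): String =
    (0 until size).map { y =>
      (0 until size).map(x => if (live((x, y))) '#' else '.').mkString
    }.mkString("\n")

  var world = glider
  for (gen <- 0 until 4) {
    println(s"generation $gen\n${show(world)}\n")
    world = step(world)
  }
}
```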

From this it is clear that if the cellular automaton model (or, to name it after John von Neumann, the Calculating Universe Model) is true, only the Grown Universe Hypothesis can be true. Otherwise, if the onion model is correct, then both the Constructible Hypothesis and the Grown Universe Hypothesis have a chance to be true.

So these questions arise:

  • For the control behavior of human-built systems, is the Onion Model of Control true? Or
  • For the control behavior of human-built systems, is the Calculating Universe Model true?

For me, these are important questions, even if they seem not that pragmatic :) But it is important to investigate what we really can do and achieve in principle in the domain of software engineering. Of course this is no exact analysis, but it should illustrate what I'm thinking about.

If someone has a hint about already available work or texts from philosophers, computer scientists and so on, I would be glad about a pointer :)



14 February 2010

Ideas for Processing II

In the last post, I sketched the idea of a graphics subunit which just takes render tasks describing *what* to draw, and which has the full responsibility and specialisation for *how* to draw. A graphics subunit that is independent in this way would have many implications, and the technology necessary to implement it goes beyond pure geometrical or mathematical techniques. Knowledge and machine learning will also play an important role in its implementation.

Now the question is how we could come closer to this goal. One interesting path may be to flip the processing chain of vision: instead of recording images with an eye (camera) and processing them like the human vision system does, let us draw pictures by using remembered pieces of images. In the wide area of computer science research there already exists a field called image-based rendering, which tries to create renderings from images recorded by cameras. But as an extension of this more or less geometrical way of thinking, I would like to add the approach of the flipped human vision chain to the term image-based rendering (because it is simply the best term).

Where does Processing come into the picture? Well, it is a big step to model the human vision system. One powerful and promising approach is the Hierarchical Temporal Memory method introduced by Jeff Hawkins and his company Numenta, Inc. But as a first step that would be too complicated, even if the underlying principles are not. I would prefer to start with small experiments, done in Processing.

One of the first experiments will be to create a Processing application which draws some graphics with its own dynamics. It is not the objective that these graphics show anything special. Then I will send this application - let's call it the "Drawer" - signals in the form of data packages, generated by another application (Haskell or Erlang). The Drawer should then react and change its drawing dynamics according to the signals received. I want to investigate how this system can be set up in such a way that desired graphics can be drawn by providing selected signals. By the way, the signal path is planned to be XMPP.
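
As a starting point, here is how such a Drawer could look as a Scala sketch against Processing 3's core library (assumed to be on the classpath). The XMPP signal path is not implemented here; a local thread stands in for the Haskell/Erlang signal source, and the names (Drawer, speed) are mine.

```scala
import processing.core.PApplet
import scala.util.Random

// The "Drawer": a Processing sketch with its own drawing dynamics (a circling
// dot). Incoming signals change the dynamics; here a local thread fakes the
// signal source that would later be a Haskell or Erlang process over XMPP.
class Drawer extends PApplet {
  @volatile var speed: Float = 1.0f            // the dynamics a signal can change
  private var angle: Float = 0.0f

  override def settings(): Unit = size(400, 400)

  override def draw(): Unit = {
    background(20)
    angle += 0.02f * speed
    val x = width / 2 + 120 * math.cos(angle).toFloat
    val y = height / 2 + 120 * math.sin(angle).toFloat
    ellipse(x, y, 20, 20)
  }
}

object Drawer {
  def main(args: Array[String]): Unit = {
    val sketch = new Drawer
    PApplet.runSketch(Array("Drawer"), sketch)

    // Stand-in for the XMPP channel: every two seconds a new "signal" arrives.
    val signals = new Thread(() => while (true) {
      Thread.sleep(2000)
      sketch.speed = Random.nextFloat() * 4
    })
    signals.setDaemon(true)
    signals.start()
  }
}
```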

The next step would be to add memory, setting the course towards the flipped vision processing chain. Processing is a very good tool for this, especially since it is possible to use Scala. So let's start!

06 February 2010

Requirements for a Knowledge Management Tool

We are in the information age, aren't we? Well, we collect information, put it into big databases, or distribute it across the web. But in fact we collect no information, just plain data: numbers, texts, images, videos etc. All our computer-based technology is designed for measurement and collection; that is its strong side. But I want to access information, and - in the best case - only the information which belongs to my problem.

So the question is: how can we transform collected data into useful information? Many philosophers have discussed and investigated what knowledge, insight and information are. I love this point of view:

Humans are able to act in a free manner, but actions are conditioned by situations. In the course of a lifetime, a human experiences many situations (described by the data provided by the senses), and the actions performed in those situations induce new situations. Therefore, one thing is clear: data must be associated with situations; otherwise they are meaningless, without any value for the existence of humans.

Another thing emerges from the necessity of decisions: we all know that life confronts us with problems which force us to decide to go one way or the other, to choose between options. The decision is the predecessor of the action, and the problem is the predecessor of the decision. Because we remember the association between situation, action (as a consequence of a decision, which is a consequence of a problem) and result, we can decide better in the future, in more or less similar situations.

But background knowledge or experience by itself is not sufficient to decide well. A goal is needed or, to put it more generally, an intention. The experience (the situation-action-result tuples) tells us which action in the given situation has the biggest chance of leading to success with respect to what we want (the goal, the intention).

Summary: facts (data) - situation (context) - problems (questions) - goals (intention) are the things which distinguish data from information: in combination, they help to determine the best action to take. In short, this is knowledge; in some sense it is the equation of facts - situation - problems - goals - actions, solved for actions.

And from here on, I can tell what my requirements for a Knowledge Management Tool are (some things I cannot explain in more detail here; a small data-model sketch follows after the lists below):

  • allow to collect any kind of facts, any type of data
  • allow to collect context descriptions
  • allow to collect problems
  • allow to collect intentions
  • connect all these elements by n:m relations
  • allow to weight (and re-weight) these connections at any time
  • allow to add or discard any connections at any time

Unfortunately, we have no advanced artificial intelligence today which would allow us to encode and access every shade of gray and every color of the palette of life. We have to categorize and quantize what we experience every day. And we have technological limits. With respect to this, I would add these requirements:

  • visualise the connections
  • allow to classify contexts, problems and intentions
  • find connections automatically by statistical analysis of words or other means
  • allow to create contexts, problems and intentions recursively out of other contexts, problems, intentions and data.

The last requirement is inspired by the observation that life is a steady flow:

  • allow to create contexts out of sequences of conditions or event descriptions.
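
To make the core of these requirements a little more tangible, here is a minimal sketch of a possible data model in Scala (all names are invented; this is not an existing tool): facts, contexts, problems and intentions as elements, connected by weighted n:m relations that can be added, re-weighted and discarded at any time.

```scala
// Minimal sketch of a possible data model behind the requirements above
// (invented names): four kinds of elements, connected by weighted n:m
// relations that can be created, re-weighted and removed at any time.
sealed trait Element { def id: String; def text: String }
final case class Fact(id: String, text: String)      extends Element
final case class Context(id: String, text: String)   extends Element
final case class Problem(id: String, text: String)   extends Element
final case class Intention(id: String, text: String) extends Element

final case class Connection(from: String, to: String, weight: Double)

final class KnowledgeBase {
  private var elements    = Map.empty[String, Element]
  private var connections = Set.empty[Connection]

  def add(e: Element): Unit = elements += (e.id -> e)

  def connect(from: String, to: String, weight: Double): Unit =
    // re-weighting replaces an existing connection between the same elements
    connections = connections.filterNot(c => c.from == from && c.to == to) +
      Connection(from, to, weight)

  def disconnect(from: String, to: String): Unit =
    connections = connections.filterNot(c => c.from == from && c.to == to)

  /** Elements connected to `id`, strongest first. */
  def related(id: String): List[(Element, Double)] =
    connections.collect {
      case Connection(`id`, to, w) if elements.contains(to)     => (elements(to), w)
      case Connection(from, `id`, w) if elements.contains(from) => (elements(from), w)
    }.toList.sortBy(-_._2)
}

object KnowledgeDemo extends App {
  val kb = new KnowledgeBase
  kb.add(Context("c1", "Monday morning, office, no coffee left"))
  kb.add(Problem("p1", "How to stay awake through the planning meeting?"))
  kb.add(Fact("f1", "The cafe around the corner opens at 8:00"))
  kb.connect("p1", "c1", 0.9)
  kb.connect("p1", "f1", 0.6)
  kb.related("p1").foreach { case (e, w) => println(f"$w%.1f  ${e.text}") }
}
```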

I can't wait to play with a system which provides these features. So I hope someone will implement these *few* requirements soon :-)

31 January 2010

Ideas for Processing I

Today, computer graphics is done by doing geometry. Shapes like circles, rectangles and lines are placed on a canvas tagged with (rectangular) coordinates. Colors and lengths are prescribed or calculated by providing numbers. In the 3D domain, things work similarly: raytracing and other 3D-to-2D renderings are based on geometrical optics. Is there any problem? Well, I will explain what I am thinking about.

For years I have been fascinated when Mr. Data in the Star Trek movies and series says to the computer: "Computer, show me", and the computer draws the desired information in the best way, without being told to stretch, zoom or even use coordinates. From such scenes I always got the vision of a computer kernel, which calculates, deduces and collects, and a graphics subunit, which does all the graphics. And the important thing is this: both the computer kernel and the graphics subunit only *talk*. No API calls. To illustrate this, here is an example dialog:

"Hello Graphic Subunit, please display planet Venus, a starship type Klingon fighter in standard orbit"

"Hello Graphic Subunit, please display this 2D point set in a chart and this text"

In fact, the collaboration of the computer kernel and the graphics subunit should be like that of a customer who goes to an artist and says: "Artist, paint a picture of me that embraces my power and glory". Although many questions arise from this, I only want to point out that only the artist (the graphics subunit) has to bother about the information which affects *how* the picture is drawn. He is the expert for graphics. The customer (the computer kernel) should only tell *what* to draw. In today's technical world it is mostly the web server or an application core that has to deal with geometry and rendering. Of course, today graphics is described in abstract coordinates, and presentation and content are separated by HTML and CSS. But that doesn't change the fact that too many aspects of graphics and geometry are part of the application core. The latter has to call APIs, providing shapes, coordinates and colours.
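
One way to read this division of labour in code (purely illustrative; the message vocabulary is invented for this sketch): the kernel only sends a description of *what* to show, and the graphics subunit alone decides scale, placement and style.

```scala
// Purely illustrative: the kernel describes *what* to show; the graphics
// subunit alone decides *how* (scale, placement, colours). The vocabulary of
// the messages is invented for this sketch.
sealed trait RenderRequest
final case class ShowPoints(caption: String, points: Seq[(Double, Double)]) extends RenderRequest
final case class ShowText(text: String)                                     extends RenderRequest

object GraphicsSubunit {
  def display(request: RenderRequest): String = request match {
    case ShowPoints(caption, pts) =>
      // The subunit chooses the chart type, axis range and layout itself.
      val (xs, ys) = pts.unzip
      s"chart '$caption': ${pts.size} points, x in [${xs.min}, ${xs.max}], y in [${ys.min}, ${ys.max}]"
    case ShowText(text) =>
      s"text panel: $text"
  }
}

object KernelDemo extends App {
  // The kernel only says what it wants shown - no coordinates, no shapes.
  println(GraphicsSubunit.display(ShowPoints("orbit", Seq((0.0, 1.0), (1.0, 0.0), (0.0, -1.0)))))
  println(GraphicsSubunit.display(ShowText("Klingon fighter in standard orbit around Venus")))
}
```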

Our technical possibilities are not powerful enough to come even close to what I described above. At this point I have to state a fundamental criticism: as long as our graphics technology is restricted to geometry, it will never be powerful enough. This is not the place to provide reasons for this hypothesis, but it is my strong belief. To prevent misunderstandings, it is important to note that when I say geometry, I mean the mathematics as it is known and practiced today. I think it would extend our possibilities if we invested more in things like image-based rendering and in flipping the human vision processing chain into the opposite direction. Vision then becomes rendering.

Well, I know these are big mental leaps and not a smooth chain of arguments; it is more a set of ideas. But anyway, it has given me some ideas for experiments with Processing. This post is long already, so I will sketch some details in a later post.