This private blog is the successor of my old blog, covering questions in the context of software, art and philosophy. Connected to this is my interest in functionally safe software.
Any trademarks mentioned are held by their respective owners.
All pictures and text in this blog (if not stated otherwise) (C) Hans N. Beck
04 March 2012
Black and White and the Grey between
06 November 2011
Random Sequences of Code
01 November 2011
Get me Control - no ?
16 October 2011
Hello, Siri
There are two other properties of Siri which are special: using knowledge and using context. You no longer have to speak isolated commands like "Radio" "on". You can talk as you would to a person - a beginner or a child, but a person. And second, this "person" has knowledge and context awareness. By using data from location services, databases, documents etc., it can interpret what the user said. It is a first step into a much bigger world: artificial understanding. If speech recognition is our planet, then artificial understanding is like being offered the Milky Way.
For me it is not the question whether Siri works exactly as described above or whether there are many problems with it. It is the start of a new way of working with artificial intelligence. It is a hypothesis of mine that the universe works probabilistically. I believe that the physics we use today - with solutions of algebraic equations - is just the recognition of an average or of limits. In quantum theory and statistical thermodynamics, the probabilistic nature is directly observable. And my strong belief is that the brain is - as far as it relates to knowledge processing and control - a statistical engine, where one brick of it is association and pattern recognition.
09 September 2011
Agile, Architecture and other things
- How can software engineering become exactly that: engineering?
- Is it possible to construct technical systems of any complexity level? (The Constructible Hypothesis) Or, as a counter-statement:
- Can technical systems in general only be grown by applying evolutionary principles? (The Grown Universe Hypothesis)
- For the control behavior of human-built systems, is the Onion Model of Control true? Or
- For the control behavior of human-built systems, is the Calculating Universe Model true?
14 February 2010
Ideas for Processing II
In the last post, I sketched the idea of a graphics subunit which just takes render tasks describing *what* to draw, and which has the full responsibility and specialisation for *how* to draw. A graphics subunit that is independent in this way would have many implications, and the technology necessary to implement it goes beyond pure geometrical or mathematical techniques. Knowledge and learning of machines will also play an important role in its implementation.
Now, the question is how we could come closer to this goal. One interesting path may be to flip the processing chain of vision: instead of recording images with an eye (camera) and processing them like the human vision system does, let us draw pictures by using remembered pieces of images. In the wide area of computer science research there already exists a field called image-based rendering, which tries to create renderings from images recorded by cameras. But as an extension to this more or less geometrical way of thinking, I would like to add the approach of the flipped human vision chain to the term image-based rendering (because it is simply the best term).
Where does Processing come onto the scene? Well, it is a big step to model the human vision system. One powerful and promising approach is the Hierarchical Temporal Memory method introduced by Jeff Hawkins and his company Numenta, Inc. But as a first step it would be too complicated, even if the underlying principles are not. I would prefer to start with small experiments, done in Processing.
One of the first experiments will be to create a Processing application which draws some graphics with its own dynamics. It is not the objective that these graphics show anything special. Then I will send this application - let's call it the "Drawer" - signals in the form of data packages, generated by another application (in Haskell or Erlang). The Drawer should then react and change its drawing dynamics according to the signal received. I want to investigate how this system can be set up so that desired graphics can be drawn by providing selected signals. BTW, the signal path is planned to be XMPP.
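The reaction logic of such a Drawer can be sketched independently of Processing and XMPP. Here is a minimal sketch in plain Python; the signal names ("speed", "hue") and the parameter model are my own illustrative assumptions, not a fixed protocol:

```python
# Sketch of the "Drawer" reaction logic: a signal is a data package
# (e.g. decoded from an XMPP message body) that changes the drawing dynamics.
class Drawer:
    def __init__(self):
        # current drawing dynamics: rotation step per frame and a color hue
        self.params = {"speed": 1.0, "hue": 0.0}

    def on_signal(self, packet):
        # accept only known parameters; ignore everything else
        for key, value in packet.items():
            if key in self.params:
                self.params[key] = float(value)

    def next_frame_angle(self, frame):
        # the dynamics: how far the drawing has rotated at a given frame
        return frame * self.params["speed"]

drawer = Drawer()
drawer.on_signal({"speed": 2.5, "hue": 180})
print(drawer.next_frame_angle(10))  # 25.0
```

In a real Processing sketch, `next_frame_angle` would be evaluated inside the draw loop, while `on_signal` would be fed by the incoming XMPP packets.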
The next step would be to add memory, which takes the course toward the flipped vision processing chain. Processing is a very good tool for this, especially since it is possible to use Scala. So let's start!
06 February 2010
Requirements for a Knowledge Management Tool
We are in the information age, aren't we? Well, we collect information, put it in big databases, or distribute it across the web. But in fact we collect not information but plain data: numbers, texts, images, videos etc. All our computer-based technology is designed for measurement and collection; this is its strong part. But I want to access information, and - in the best case - only that which belongs to my problem.
So, the question is: how can we transform collected data into useful information? Many philosophers have discussed and investigated what knowledge, insight and information are. I love this point of view:

Humans are able to act freely, but actions are conditioned by situations. In the course of a lifetime, a human experiences many situations (described by the data provided by the senses), and the actions performed in those situations induce new situations. Therefore one thing is clear: data must be associated with situations; otherwise it is meaningless, without any value for the existence of humans.
Another thing emerges from the necessity of decisions: we all know that life confronts us with problems which force us to decide to go one way or the other, to choose between options. The decision is the predecessor of the action, and the problem is the predecessor of the decision. Because we remember the association between situation, action (as consequence of a decision, as consequence of a problem) and result, we can decide better in future, in more or less similar situations.
But background knowledge or experience by itself is not sufficient to decide well. A goal is needed, or more generally, an intention. The experience (the situation-action-result tuples) tells us which action in the given situation has the biggest chance of leading to success with respect to what we want (the goal, the intention).
Summary: facts (data) - situation (context) - problems (questions) - goals (intention) are the things which distinguish data from information: in combination, they help determine the best action to take. In short, this is knowledge; in some sense it is the equation of facts - situation - problems - goals - actions, solved for actions.
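This "equation solved for actions" can be illustrated by a tiny sketch. The experience data and the scoring scheme are deliberate simplifications of my own, just to make the tuple idea concrete:

```python
# Toy model: experience as (situation, action, result) tuples; "knowledge"
# is picking the action whose remembered result best matches the current goal.
experience = [
    ("rain", "take umbrella", "stayed dry"),
    ("rain", "run", "got wet"),
    ("sun", "walk", "pleasant"),
]

def best_action(situation, goal):
    # among actions tried in this situation, prefer the one whose result
    # equals the intended outcome (a stand-in for a weighted similarity)
    candidates = [(a, r) for s, a, r in experience if s == situation]
    for action, result in candidates:
        if result == goal:
            return action
    # no remembered success: fall back to any remembered action, if one exists
    return candidates[0][0] if candidates else None

print(best_action("rain", "stayed dry"))  # take umbrella
```

A real system would of course replace the equality checks with weighted, fuzzy matching over situations and goals.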
And from here on, I can tell what my requirements for a Knowledge Management Tool are (some things I cannot explain in more detail here):
- allow to collect any kind of facts, any type of data
- allow to collect context descriptions
- allow to collect problems
- allow to collect intentions
- connect all these elements by n:m relations
- allow to weight (and re-weight) these connections at any time
- allow to add or discard any connection at any time
Unfortunately, we have no advanced Artificial Intelligence today which would allow us to encode and access every shade of gray and every color of the color set of life. We have to categorize and quantize what we experience every day. And we have technological limits. With respect to this, I would add these requirements:
- visualise the connections
- allow to classify context, problems and intentions
- find connections automatically by statistical analysis of words or other means
- allow to create contexts, problems and intentions recursively out of other contexts, problems, intentions and data
The last requirement is inspired by the observation that life is a steady flow:
- allow to create contexts out of sequences of conditions or event descriptions.
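The core of the requirements above - typed elements connected by weighted n:m relations that can be added, re-weighted or discarded at any time - fits in a very small data model. This is a minimal sketch with names of my own choosing, not a description of any existing tool:

```python
# Minimal data model for the Knowledge Management Tool requirements:
# facts, contexts, problems and intentions as typed elements,
# connected by weighted n:m relations.
class KnowledgeGraph:
    def __init__(self):
        self.elements = {}      # id -> (kind, payload)
        self.connections = {}   # (id_a, id_b) -> weight

    def add(self, eid, kind, payload):
        self.elements[eid] = (kind, payload)

    def connect(self, a, b, weight=1.0):
        # n:m relations: any element may be connected to any other
        self.connections[(a, b)] = weight

    def reweight(self, a, b, weight):
        # weights can be revised at any time
        self.connections[(a, b)] = weight

    def discard(self, a, b):
        # connections can be dropped at any time
        self.connections.pop((a, b), None)

kg = KnowledgeGraph()
kg.add("f1", "fact", "meeting notes 2010-02-06")
kg.add("c1", "context", "project kickoff")
kg.connect("f1", "c1", 0.8)
kg.reweight("f1", "c1", 0.3)
```

The visualisation, classification and automatic-connection requirements would then operate on top of this structure.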
I can't wait to play with a system which provides these features. So I hope someone will implement these *few* requirements soon :-)
31 January 2010
Ideas for Processing I
Today, computer graphics is done by making geometry. Shapes like circles, rectangles and lines are placed on a canvas tagged with (rectangular) coordinates. Colors and lengths are prescribed or calculated by providing numbers. In the 3D domain, things work similarly. Raytracing and other 3D-to-2D renderings are based on geometric optics. Is there any problem? Well, I will explain what I am thinking about.

For years, I have been fascinated when Mr. Data in the Star Trek movies and series said to the computer: "Computer, show me", and the computer drew the desired information in the best way, without being told to stretch, zoom or even use coordinates. From such scenes I always got the vision of a computer kernel, which calculates, deduces and collects, and a graphics subunit, which does all the graphics. And the important thing is that both the computer kernel and the graphics subunit only *talk*. No API calls. To illustrate this, here is an example dialog:
"Hello Graphics Subunit, please display planet Venus and a starship of type Klingon fighter in standard orbit"
"Hello Graphic Subunit, please display this 2D point set in a chart and this text"
In fact, the collaboration between the computer kernel and the graphics subunit should be the same as that of a customer who goes to an artist and says "Artist, paint a picture of me, embracing my power and glory". Although many questions arise from this, I only want to point out that only the artist (the graphics subunit) has to bother about information which affects *how* the picture is drawn. He is the expert for graphics. The customer (the computer kernel) should only tell *what* to draw. In today's technical world, mostly the webserver or an application core has to deal with geometry and rendering. Of course, today graphics is described in abstract coordinates, and presentation and content are divided by HTML and CSS. But that doesn't change the fact that too many aspects of graphics and geometry are part of the application core. The latter has to call APIs, providing shapes, coordinates and colours.
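The *what*-only message such a computer kernel might send could look like the following sketch. The field names are illustrative assumptions of mine, not an existing protocol; the point is what the message does *not* contain:

```python
# Sketch of a declarative render task: the kernel states *what* to show,
# and deliberately says nothing about *how* to show it.
import json

render_task = {
    "intent": "display",
    "subjects": [
        {"kind": "planet", "name": "Venus"},
        {"kind": "starship", "type": "Klingon fighter",
         "state": "standard orbit"},
    ],
    # note what is absent: no shapes, coordinates, colors, sizes or
    # projection - deciding those is the graphics subunit's expertise
}

message = json.dumps(render_task)
```

The graphics subunit receiving `message` would then resolve it into concrete geometry, just as the artist resolves the customer's wish into a concrete painting.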
Our technical possibilities are not powerful enough to come even close to what I described above. At this point I have to state a fundamental criticism: as long as our graphics technology is restricted to geometry, it will never be powerful enough. This is not the place to provide reasons for this hypothesis, but it is my strong belief. To prevent misunderstandings, it is important to note that when I say geometry, I mean the mathematics as it is known and practiced today. I think it would extend our possibilities if we investigated more in things like image-based rendering and flipped the human vision processing chain into the opposite direction. Vision then becomes rendering.
Well, I know these are big mental steps and not a smooth chain of arguments; it is more a set of ideas. But anyway, this gave me some ideas for experiments with Processing. This post is already long, so I will sketch some details in a later post.