14 December 2010

Programming on the iPad

For half a year now, I have been using my iPad every day - for reading email, browsing the Web, reading my Perry Rhodan issues and other magazines and books. A few weeks ago, I started using the iPad to write meeting minutes, and I use Apple's Numbers app more and more.
What I am missing is the ability to write programs. I'm not talking about big software, but small, really useful programs. How could that work?

The iPad is a gateway to the Web. Consequently, programming on the iPad doesn't mean programming the iPad itself, but programming in the Web. Thanks to Google and others, office documents can be created and edited on the Web. So why not programs?

But what would a Web-hosted - or better, Cloud-hosted - program look like? In the age before the cloud, creating software meant accessing the capabilities of the operating system - the APIs - to fetch data, build graphics, persist data, and tie everything together with the data and control flow of a programming language. Well, the Cloud has no operating system, but it too has capabilities to fetch or send data, to create graphics via SVG or WebGL, and to store data. Even APIs of a kind are available - the REST, SOAP and other interfaces. Twitter, Facebook and other portals use such techniques. The problem is the language. If there were such an editor on the Web, and a programming language - or better: if there were a Cloud-hosted development environment that allowed binding these "Web APIs" together with data and control flow - yes, then I could program with the iPad. I could program algorithms, create visualizations, and would not have to think much about storing or accessing data. Where my programs are stored would not be relevant at all. Just data processing in its strict and pure sense.
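To make the idea concrete, here is a minimal sketch (in Python, with invented endpoint names) of what such a Cloud program could be: nothing but Web API calls bound together by data flow. The fetch and store functions are stand-ins for REST calls; a hypothetical Cloud development environment would provide the real ones.

```python
# A "cloud program" as a pipeline of three Web API steps.
# Endpoint URLs and the sample data are invented for illustration only.

def fetch(endpoint):
    # stand-in for a GET against a REST endpoint
    return [{"city": "Berlin", "temp": 3}, {"city": "Rome", "temp": 12}]

def transform(records):
    # the data flow: keep only the warm cities
    return [r for r in records if r["temp"] > 5]

def store(endpoint, records):
    # stand-in for a PUT; reports how many records were "stored"
    return len(records)

# The whole program is just the composition of the three steps:
stored = store("https://example.org/warm",
               transform(fetch("https://example.org/weather")))
```

Where the program itself runs, or where the stored records live, does not appear anywhere in the code - which is exactly the point.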

Unfortunately, I don't know of such a Web development environment. Is there one?

21 March 2010

Future of Functionally Safe Systems

One of my responsibilities in my job is functional safety. Roughly speaking, functional safety means that running the software does not induce any unacceptable risk. Typical safety-relevant software applications are avionics software, train control applications and medical devices. Today, the general approach to achieving functional safety for an application consists of two basic things:


  1. to demonstrate that the software works correctly, which means that it works as the requirements state, and
  2. to demonstrate that everything is done to avoid errors rooted in human factors.


There are many safety standards in the civil and military domains which prescribe methods to meet objectives 1) and 2) in a repeatable, predictable and documented way. Many best practices of hardware and software engineering have influenced these standards.


The problem is that there is an underlying assumption which is critical from my point of view: that hardware/software systems are completely deterministic, in the sense that any state of the system can be predicted and tested for any point in time. We are familiar with this from classical physics: if the differential equation of a system is given, we can - at least in principle - predict the system state for any point in time.


But it is not hard to see that all systems are getting more and more complicated. Proving functional safety requires more and more effort, and leaves more and more gaps. We know that systems with very complicated behaviour exist: fractal systems and cellular automata are well-known examples. Often, the only way to predict their behaviour is to run the system from the beginning (the boundary conditions) up to the point of interest. And we know that complicated, interwoven structures like the technical systems emerging today can behave unpredictably or "chaotically". They look like indeterministic systems (even if there is no indeterminism in the strict mathematical sense).



And at this point, our approach to functional safety is lost. So what can we do? Give up the objective of proving every function or behavior correct! Huh, that would mean allowing errors, even unanticipated ones. Yes! The future safe fault-tolerant system (FSFTS) has to be designed so that it can operate with errors, even unpredictable ones. Or in other words: engineering methodology and practice have to be aligned to the objective that a system is safe not because we know every static and dynamic detail of it, but because errors do not harm the system (and if you see security issues as errors, it gets really interesting...).


Now, can we find conditions or characteristics of such systems, as a starting point? In fact, I have thought about this for a long time, but I have no solid conclusions yet. Many experiments will be needed. Anyway, here are some basic assumptions I can share:


Assume the state trajectory of an FSFTS is not known in detail for every point in time, but it is limited in state space and constrained by some attractors (a well-known example: the Lorenz attractor). In effect, a single line in state space becomes a band of lines. The state trajectory loops, but is not necessarily closed in itself. Then an error might move the system trajectory, but only within this band, and the attractors ensure that the system is kept within its bounds.
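A toy numerical sketch of this idea (in Python, with an invented one-dimensional dynamic): a simple contracting map plays the role of the attractor, an injected "error" kicks the state, and the attraction pulls the trajectory back into its band without any explicit error handling.

```python
# A contracting dynamic with attractor at x = 0. An "error" is injected
# once; the attractor, not any error-handling code, restores the bounds.

def step(x):
    return 0.5 * x  # contraction towards the attractor

def run(steps, error_at, error_size):
    x, trajectory = 1.0, []
    for n in range(1, steps + 1):
        x = step(x) + (error_size if n == error_at else 0.0)
        trajectory.append(x)
    return trajectory

trajectory = run(steps=20, error_at=10, error_size=4.0)

# The error briefly throws the state far out of its band ...
peak = max(abs(x) for x in trajectory)
# ... but a few steps later the contraction has pulled it back:
max_late = max(abs(x) for x in trajectory[15:])
```

Designing real systems this way would mean engineering the contraction (the attractor), not the trajectory itself - which is the shift in engineering I am speculating about.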


In the next step, assume this system state trajectory is partitioned into the trajectories of several subsystems. Now if some attractors influenced other subsystems, we would get connected systems from which the overall functionality emerges. In other words: I would expect that in the future, system design would mean engineering attractors, system state boundaries and trajectory bands. Engineering would mean not constructing the system trajectory point by point, by cause and effect, but engineering higher dynamics and behaviourally complex systems. I doubt that the available mathematical tools would help us here; I think we need some new basic insights in the area of complex, chaotic systems. In this context, it is also interesting to follow the progress and methods of systems biology.


Well, we are far away from this, and many questions remain. But I personally believe this will be the future of engineering.

14 February 2010

Ideas for Processing II

In the last post, I sketched the idea of a graphics subunit which just takes render tasks describing *what* to draw, and which has the full responsibility and specialisation for *how* to draw it. A graphics subunit independent in this way would have many implications, and the technology necessary to implement it goes beyond purely geometrical or mathematical techniques. Machine knowledge and learning will also play an important role in its implementation.

Now, the question is how we could come closer to this goal. One interesting path may be to flip the processing chain of vision: instead of recording images with an eye (camera) and processing them like the human vision system does, let us draw pictures by using remembered pieces of images. In the wide area of computer science research, there already exists a field called image-based rendering, which tries to create renderings from images recorded by cameras. As an extension to this more or less geometrical way of thinking, I would like to add the approach of the flipped human vision chain to the term image-based rendering (simply because it is the best term).

Where does Processing come onto the scene? Well, modelling the human vision system is a big step. One powerful and promising approach is the Hierarchical Temporal Memory method introduced by Jeff Hawkins and his company Numenta, Inc. But as a first step, it would be too complicated, even if the underlying principles are not. I would prefer to start with small experiments, done in Processing.

One of the first experiments will be to create a Processing application which draws some graphics with its own dynamics. It is not the objective that these graphics show anything special. Then I will send this application - let's call it the "Drawer" - signals in the form of data packages generated by another application (in Haskell or Erlang). The Drawer should then react and change its drawing dynamics according to the signals received. I want to investigate how this system can be set up so that desired graphics can be drawn by providing selected signals. By the way, the signal path is planned to run over XMPP.
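The core of the planned setup can be sketched in a few lines (Python here instead of Processing, and plain dicts instead of XMPP messages; all parameter names are invented): the Drawer's dynamics are just a few parameters, and each incoming signal package shifts them.

```python
# A toy "Drawer" reacting to signal packages by adjusting its dynamics.
# In the real experiment the signals would arrive over XMPP.

def react(dynamics, signal):
    key, delta = signal["key"], signal["delta"]
    if key in dynamics:
        updated = dict(dynamics)
        updated[key] = dynamics[key] + delta
        return updated
    return dynamics  # unknown signals are ignored

start = {"speed": 1.0, "radius": 10.0}
signals = [{"key": "speed", "delta": 0.5},
           {"key": "radius", "delta": -3.0},
           {"key": "hue", "delta": 1.0}]   # not a known parameter

state = start
for signal in signals:
    state = react(state, signal)
```

The interesting question is then the inverse one: which sequence of signals produces a wanted drawing?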

The next step would be to add memory, setting the course towards the flipped vision processing chain. Processing is a very good tool for this, especially since it can be used with Scala. So let's start!

06 February 2010

Requirements for a Knowledge Management Tool

We are in the information age, aren't we? Well, we collect information, put it in big databases, or distribute it across the Web. But in fact, we collect no information, just plain data: numbers, texts, images, videos etc. All our computer-based technology is designed for measurement and collection; that is its strong part. But I want to access information, and - in the best case - only the information that belongs to my problem.

So the question is: how can we transform collected data into useful information? Many philosophers have discussed and investigated what knowledge, insight and information are. I love this point of view:

Humans are able to act freely, but actions are conditioned by situations. In the course of a lifetime, a human experiences many situations (described by the data provided by the senses), and the actions performed in those situations induce new situations. Therefore, one thing is clear: data must be associated with situations; otherwise, it is meaningless, without any value for human existence.

Another thing emerges from the necessity of decisions: we all know that life confronts us with problems which force us to decide to go one way or the other, to choose between options. The decision is the predecessor of the action, and the problem is the predecessor of the decision. Because we remember the association between situation, action (as the consequence of a decision, which is the consequence of a problem) and result, we can decide better in the future, in more or less similar situations.

But background knowledge or experience by itself is not sufficient to decide well. A goal is needed - or, more generally, an intention. Experience (the situation-action-result tuples) tells us which action in the given situation has the biggest chance of leading to success with respect to what we want (the goal, the intention).

Summary: facts (data) - situation (context) - problems (questions) - goals (intention) are the things that distinguish data from information: in combination, they help to determine the best action to take. In short, this is knowledge; in some sense, it is the equation of facts - situation - problems - goals - actions, solved for actions.

And from here, I can state my requirements for a knowledge management tool (some of which I cannot explain in more detail here):

  • allow collecting any kind of facts, any type of data
  • allow collecting context descriptions
  • allow collecting problems
  • allow collecting intentions
  • connect all these elements by n:m relations
  • allow re-weighting these connections at any time
  • allow adding or discarding any connection at any time
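The requirements above can be sketched as a tiny data model (in Python; all class and variable names are illustrative only, not a real tool): facts, contexts, problems and intentions as nodes, and a weighted n:m connection store that supports adding, re-weighting and discarding at any time.

```python
# A minimal knowledge store: nodes of four kinds, connected by
# weighted n:m relations held in a single dictionary.

class Store:
    def __init__(self):
        self.connections = {}  # (node_a, node_b) -> weight

    def connect(self, a, b, weight):
        self.connections[(a, b)] = weight  # also re-weights an existing pair

    def discard(self, a, b):
        self.connections.pop((a, b), None)

# Nodes as (kind, text) tuples; any hashable representation would do.
rain = ("fact", "it rains")
commute = ("context", "commuting to work")
wet = ("problem", "getting wet")

store = Store()
store.connect(rain, commute, 0.8)
store.connect(commute, wet, 0.5)
store.connect(rain, commute, 0.9)  # re-weighting is just connecting again
```

Everything else on the list - classification, automatic connection finding, recursive contexts - would be built on top of such a relation store.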

Unfortunately, we have no advanced artificial intelligence today that would allow us to encode and access every shade and color of life. We have to categorize and quantize what we experience every day. And we have technological limits. In view of this, I would add these requirements:

  • visualise the connections
  • allow classifying contexts, problems and intentions
  • find connections automatically by statistical analysis of words or other means
  • allow creating contexts, problems and intentions recursively out of other contexts, problems, intentions and data.

The last requirement is inspired by the observation that life is a steady flow:

  • allow creating contexts out of sequences of conditions or event descriptions.

I can't wait to play with a system that provides these features. So I hope someone will implement these *few* requirements soon :-)

31 January 2010

Ideas for Processing I

Today, computer graphics is done by making geometry. Shapes like circles, rectangles and lines are placed on a canvas tagged with (rectangular) coordinates. Colors and lengths are prescribed or calculated by providing numbers. In the 3D domain, things work similarly: raytracing and other 3D-to-2D renderings are based on geometric optics. Is there any problem? Well, let me explain what I am thinking about.

For years, I have been fascinated by the scenes in the Star Trek movies and series where Mr. Data says to the computer: "Computer, show me", and the computer draws the desired information in the best way, without being told to stretch, zoom or even use coordinates. From such scenes, I always got the vision of a computer kernel, which calculates, deduces and collects, and a graphics subunit, which does all the graphics. And the important thing is: the computer kernel and the graphics subunit only *talk*. No API calls. To illustrate this, here is some example dialog:

"Hello Graphic Subunit, please display planet Venus, with a starship of type Klingon fighter in standard orbit"

"Hello Graphic Subunit, please display this 2D point set in a chart and this text"

In fact, the collaboration between the computer kernel and the graphics subunit should be like that of a customer who goes to an artist and says: "Artist, paint a picture of me, showing my power and glory". Although many questions arise from this, I only want to point out that only the artist (the graphics subunit) has to bother with information that affects *how* the picture is drawn. He is the expert for graphics. The customer (the computer kernel) should only say *what* to draw. In today's technical world, it is mostly the web server or an application core that has to deal with geometry and rendering. Of course, graphics today is described in abstract coordinates, and presentation and content are separated by HTML and CSS. But that doesn't change the fact that too many aspects of graphics and geometry are part of the application core. The latter has to call APIs, providing shapes, coordinates and colours.
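The what/how split can be sketched as a small message protocol (in Python; the task vocabulary is invented for illustration): the kernel sends declarative render tasks with no coordinates or colours, and the subunit alone decides the presentation.

```python
# The kernel describes *what* to show; the subunit decides *how*.
# Here the subunit only reports its decision instead of actually drawing.

def graphics_subunit(task):
    if task["kind"] == "chart":
        return (f"chart with {len(task['points'])} points, "
                f"captioned '{task['caption']}', layout chosen by the subunit")
    if task["kind"] == "text":
        return f"text '{task['text']}' in a font and place the subunit picked"
    raise ValueError("unknown render task")

# The kernel's message contains no coordinates, sizes or colours:
reply = graphics_subunit({"kind": "chart",
                          "points": [(0.0, 1.0), (1.0, 2.0)],
                          "caption": "temperatures"})
```

The hard part, of course, is everything hidden behind "layout chosen by the subunit" - that is where the expertise of the artist lives.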

Our technical possibilities are not powerful enough to come even close to what I described above. At this point, I have to state a fundamental criticism: as long as our graphics technology is restricted to geometry, it will never be powerful enough. This is not the place to give reasons for this hypothesis, but it is my strong belief. To prevent misunderstandings, it is important to note that by geometry I mean mathematics as it is known and practiced today. I think it would extend our possibilities if we investigated more things like image-based rendering and flipping the human vision processing chain into the opposite direction. Vision then becomes rendering.

Well, I know these are big mental leaps and not a smooth chain of arguments; it is more a set of ideas. But anyway, it gave me some ideas for experiments with Processing. This post is long already, so I will sketch some details in a later one.

17 January 2010

Time to Change

In the last year, many things have happened in my life as a computing professional. I am not writing code anymore; I am now working as a safety, quality and requirements engineer. And I am glad about it. Let me explain why.

In recent years, I had more and more doubts about whether programming is really what I am strong in and what I enjoy. Even working in different companies, where completely different software was developed - in Smalltalk, Fortran, C, C++ and other means - I always had the feeling that I was frequently doing the same thing. The problems of software development repeat, and the kinds of solutions are well known. I met many people, each with their own style; I had bosses who could never be compared, which was and is a good thing. But the game - software development - was still the same from my point of view. Different in colors, in tones, in details, but in essence the same. No big progress, as some well-known masters of software have also said. And I had, and have, so many other interests.



I know this picture is not objective; my losing interest and my perception of the software business affect each other and are not decoupled. But which is really the cause of which doesn't matter in the end. So I decided to take a path in my career which leads me away from software development. That's why I am happy to work as described above.

But what about the hobby? In my private life, I looked at many programming languages and read about software processes, design patterns and new approaches in the software domain. But the doubts described above appear here again. And at the same time, old loves came back to me: philosophy, electromagnetics, teaching electronics, the question of how a set of neurons can think and gain insight.

I've just finished the great book "Coders at Work" by Peter Seibel, which contains interviews with some big names of the software business. And the part with L Peter Deutsch, who left the software business at a late point in his life, convinced me to make this decision: software development will no longer be the objective of my hobby activities either. To be precise, this means no longer studying patterns, architectures and languages for their own sake.

Of course, I will still program to bring my new ideas to life - maybe games, graphics, experiments. But the ideas now come first, not the question of the best language or the best approach. I no longer make any attempt to make software development itself better (one exception: requirements and safety engineering). Let that be the task of computer scientists, which I am not. Just be creative, as long as it is possible.

For this goal, Scala and Processing have attracted my interest, both of which work with the Java environment. The Java environment has so many possibilities. Erlang is nice for experiments in the artificial neural network field.

But don't misunderstand me: I will not close my eyes. I will watch how technology and software science develop, because tracking and judging the social impact of new things is important. But from now on, my role will be that of an observer, not an actor, in computer science.