22 September 2008

Back to old hobbies


These days I have returned to working with Erlang. I cannot really say what the reason or impetus for doing so was, but anyway, Erlang is an interesting language, now and for the time to come. To support my work a little better, I have implemented very simple syntax highlighting, a compilation phase and a file type for Erlang in Apple's Xcode. Of course Xcode has its flaws, but the general approach it provides has some advantages.

What to do in Erlang? Well, I have never lost my interest in SVG, so I will try to write a little render engine for SVG. Not because I think that is what the world is waiting for, but just to combine two interesting topics, and to provide a graphical interface for my Artificial Neural Network and Cellular Computing engine (dream, dream). This kind of application may be well suited to the distributed computing approach of Erlang.

The other thing I have returned to is drawing. In fact I never dropped this hobby, but in the last months and years it was pushed far into the background for lack of time. That should be corrected. Here my main interest is in characters or figures, in the style of cartoons, comics or oil pastels. The picture in this post is one example, a result from a so-called Manga drawing course. OK, the word "course" is not appropriate, it was just a kind of gathering, but the people there were really interesting.
So let's go on!

27 July 2008

The word "automatism" or Secure Software, Part II

"Automtism" is a difficult term. From principle, it should be a positive word, because would'nt it be positive if a machine takes care about things which otherwise I have to do in a boring, time consuming way ? Doesn't automatism mean to get the result with nearly no effort from me ? My observation is, that the more a person knows about computer and software, the more the term 'automatism' becomes negative. It may be the wish and the expactation to keep control, as a programmer has the control because he is the creator and master of the software. Interestingly, sometimes this exceeds the wish for comfort.

Because of this, I use the terms "automatism" and "automatically" very carefully. Of course, the user should get what he wants with as little effort as possible. Give him the result with a few actions. But this is no justification for unforeseeable, magic and unobservable behaviour of the machine. The user must be able to imagine what's going on; he should always be able to explain to himself what the machine is doing (in principle, not in detail, of course!). The feeling of control should never vanish.

So, we could extend the definition of Secure Software to:

secure software is software that always does what I want,
and as many times as I want,
and which never lets any doubt come up about what I should want,
and which does what I want for exactly all my data completely, or never touches them at all,
and which never lets any doubt come up that all of this is really true.


From this insight, it is important to base any automatism on well-defined rules. I do not mean formulas or detailed algorithms. I mean top-level rules like "everything you create with this software is a document which can be saved and printed" or "a crane has a rope which can be wound". They should explain how something works in principle. Or in other words: for the little universe (often called the "domain") every machine is embedded in, such rules would be the basic metaphysics. Therefore, I will call them "Universal Rules".
Of course, the set of Universal Rules should be complete in the sense that all machine behaviour can be explained with them. The set should also be limited, such that a human can keep the whole thing in view.

When I think about a new architecture to create, I always start with identifying the roles (there is more to say about roles later) and the set of Universal Rules. From there, introducing classes and behaviour is a straightforward task. In fact, this is also the reason why I love to use Smalltalk in this stage of software development: it allows me to quickly try out the rules and roles, and to adapt them if they do not match the problem to solve. In this sense, Smalltalk is my Universal Rules explorer :-)
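
To give an impression of what such an exploration can look like, here is a minimal Smalltalk sketch built around the crane rule from above (all names and numbers are invented for illustration):

    Object subclass: #Crane
        instanceVariableNames: 'ropeLength'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'RulesExplorer'

    Crane >> initialize
        "Universal Rule: a crane has a rope which can be wound."
        ropeLength := 10

    Crane >> windUp: meters
        "Winding up shortens the free rope, but never below zero."
        ropeLength := (ropeLength - meters) max: 0

    Crane >> ropeLength
        ^ ropeLength

In a workspace I can then play with the rule immediately and see whether it matches the problem:

    | c |
    c := Crane new.
    c windUp: 3.
    c ropeLength   "=> 7"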

In this way, I believe, it is possible to design software which does as much as possible automatically, but does not exclude the user from what is going on.

29 June 2008

Taking new points of view

In the last few weeks, some new things came up which pushed me to new points of view on the world of software.
The first is the iPhone from Apple. I tried out the SDK, and I like it very much. Old feelings and memories came back from the time I wrote the software for my diploma thesis on NeXTSTEP. I like Objective-C, I like the graphics model (Display PostScript then, Quartz or "Display PDF" now).... it is an exciting way to realize ideas.
But the "new point of view" does not come from the quality of the SDK or the language. The iPhone and its new kind of user experience is a guide to looking differently at user interfaces. Multitouch offers new ways of working interactively with data, and the limited screen size forces the visual design to be reduced to what is really needed. For example, I observed that for some tasks the iPhone is my favourite tool: some web sites are optimized for the iPhone (or for mobile phones in general) so that they are more comfortable to use on such devices. As a result, on the iPhone only the relevant things are displayed, instead of the information flood presented on many current web pages when using the computer - and I hope there will be many more such optimized web pages in the future.
These two constraints - show the relevant things on a small screen in an aesthetic (!) way, and make the data or the documents themselves the objects of manipulation - are the reasons for my decision to work with the iPhone and the SDK in the future.
One remark: of course these things are not completely new; theoretical papers and books (like the one from Raskin, and other work) have described such problems and solutions already. Not to forget the original idea of the Dynabook by Alan Kay. But now it all has a practical existence: there is an interesting device, and you can play with its hardware and software.

The other thing giving me a new point of view has to do with my work at the company. Here I got some insight into the field of software safety analysis. Working in this area means looking at possible hazards which can be caused by the software, and how severe such a hazard would be if it happened. Then, depending on which international or national safety standard must be applied, the analysis determines which probability of failure must be required from software modules, or even single functions, if the hazard is to occur with at most a given probability.
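
To sketch the kind of reasoning in a strongly simplified form (the numbers here are invented for illustration; the real standards are much more detailed): if a hazard occurs exactly when the software function fails while it is being demanded, then

    P(hazard) = P(demand) * P(failure on demand)

and the requirement on the software follows by solving for the failure probability:

    P(failure on demand) <= tolerable P(hazard) / P(demand)

For example, if the hazard is tolerable at most once per 10^6 hours and the function is demanded about once per 100 hours, then the function may fail at most once per 10^4 demands.
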
All these things have to do with chains of cause and effect, with probability, and with analysis and modeling of software. Definitely my kind of interest, but in sum also a new point of view on software for me :-)

11 May 2008

Design, everywhere!

We are living in the age of Web 2.0. This can be seen in how fast the nicely designed Web is growing. CSS and modern web frameworks like Seaside, Ruby on Rails or Zope give people the means to meet aesthetic needs and expectations. Even standard GUI frameworks provide so much graphical power that a real designer is needed to use it to best effect. Gone is the time of the limited, gray, sharp-edged windows and dialog boxes to which we believed we had become accustomed over the last 15 years. Today we have designed web interfaces, designed reports, designed user interfaces.

However, the word "design" faces us not only in relation to these "outer" things, the immediately visible parts of software. People also speak about software design, architecture design, system design and so on. In this case, though, it is not the aesthetic aspect that is addressed; these terms belong to the inner part of the machine: the functionality, the building blocks of programming. All the things an engineer is called to bother with, not a designer.

The surprising fact is that even GUIs are not well designed if their functionality is bad. Every developer who is in contact with customers knows this very well. In fact, all user interfaces need both aesthetics and usability. So I simply ask: can it be that architecture design must have aesthetic aspects too?

The short answer is: yes! All the years working as a software developer in many different companies have taught me one thing: aesthetics is a good advisor when judging the technical quality of an architecture. Just as every measurement engineer knows that the human eye can draw a mean line through data points very accurately, human perception of aesthetics is very robust. The problem is that it is hard to explain what "beautiful" or "ugly" is, and it is even harder to explain in which respects a "beautiful" architecture design guarantees a technical plus for its functionality.

So a mapping is to be defined which correlates aesthetic principles with technical creation rules. The goal of such a mapping would be to understand more of the mechanisms of technical design, and to be able to teach it. Some simple mappings are well known, though. Symmetry is often a design goal in aesthetics as well as in technical design. Another is rich simplicity (about this term I will think later). But there are many more, which many of us use without being conscious of them.

But there is help: at the Software Composition Group at the University of Bern (Switzerland), a tool called Code City has been developed. With it, it may be possible to visualize software in such a way that the aesthetic senses can be used. So this tool is a translator between the technical domain and the aesthetic domain. Of course, this is only one possible solution for this mapping, but it is one we have now ;-). The tool is positioned in the context of Moose, which I mentioned in other posts, and because there are efforts to port Moose to Squeak, I hope that I can soon use all the other things related to Moose as well :-)

27 April 2008

Logging the map

A good logfile can help to find errors during software development in many ways. It can show which operation happens at which time. It may show important results, data, or the contents of variables which determine the control flow. Consequently, the most common parts of logfile entries are the timestamp, followed by some information about the operation performed at that time. This may be indicated by the name of a method or function, the name of the object being active, or similar things.
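
A typical entry of such a logfile might look like this (the lines are invented for illustration):

    2008-04-27 14:03:52.117 [StreamDecoder] decodeFrame: start
    2008-04-27 14:03:52.121 [StreamDecoder] decodeFrame: frameSize=1024
    2008-04-27 14:03:52.130 [PacketWriter]  writePacket: done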

Such a logfile can be seen as a recording of things happening in time. Therefore, I will call it a "software camera". It is one-dimensional, and therefore strictly sequential. A program with its many branches, with its possibilities and options, is serialized into one linear sequence of processing steps, and the "software camera" records them. This is very useful for debugging an application working on streaming data. The data themselves are ordered in a strict sequence (in time), and if some errors only occur in certain parts of the stream, logging with the "software camera" may be the only way to find them.

Now the question is: can something other than the timestamp and the currently active part of the source code be of interest? To explain this, I have to go into some detail.

In the years 2006 and 2007, when I worked in Basel in Switzerland, I sometimes met the guys of the Software Composition Group in Bern. I loved this, because these guys are very smart, and discussing things with them is fun. For example, I talked with Adrian Kuhn about an idea of his to create a software map. This should be like the well-known map of streets and towns, but it was not clear which metric should correspond to the "Euclidean distance" used by such a conventional map. As one possible solution, we discussed using the distance of a data packet from the point where its processing would be finished. This idea is a consequence of data-oriented design: all that matters here are the data and what happens to them. The position of a data packet would be defined by "measuring" how many processing steps have already been applied and how many steps are still to go. Time is only used for "speed" measurements, not as a coordinate.
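
Written down a little more formally (this is my own notation, not something we fixed in that discussion): if a packet p passes through n(p) processing steps, of which k(p) have already been applied, then

    position(p) = k(p)
    distance to completion(p) = n(p) - k(p)
    speed(p) = change of k(p) per unit of time

So time only enters through the speed; the coordinate of the map itself is the processing progress.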

These thoughts were my starting point when designing a logfile for a project at my company. It is also an application acting on streaming data, and I wanted to write out information which allows me to see which data packets are at which point in the processing landscape. In detail, each entry contains:
  • the identification of the specific datum (sometimes with, sometimes without its values),
  • the action the datum is waiting in front of,
  • or the action it is currently in,
  • or the action it just came out of.
I have not used any timestamp at all.
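
To make this concrete, here is a small sketch of such position logging in Smalltalk (the class, the selectors and the log format are invented for illustration; the actual project had its own form):

    Object subclass: #MapLogger
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MapLogging'

    MapLogger >> datum: datumId before: actionName
        "The datum is waiting in front of an action."
        Transcript show: datumId , ' | before: ' , actionName; cr

    MapLogger >> datum: datumId in: actionName
        "The datum is currently being processed by an action."
        Transcript show: datumId , ' | in: ' , actionName; cr

    MapLogger >> datum: datumId outOf: actionName
        "The datum has left an action."
        Transcript show: datumId , ' | out: ' , actionName; cr

Every processing step reports the position of each packet it touches, for example with "MapLogger new datum: 'packet-4711' in: 'decode'". The resulting file then shows the route of every datum through the processing landscape instead of a timeline.
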
Going this way helped me to build the architecture such that it is robust against any way the data may come in. They can be shuffled, delayed, reordered in many ways. Tracking the data through the scenery of processing helped me to identify "unused roads", too narrowly designed "roads" and important "crossings". In the next step, I will enrich the information such that I can generate SVG to create a real map. We will see.

To read more about software visualization, I suggest looking at the website of Moose, a reengineering and modeling tool. In its context there are many tools with nice ideas. It's worth a look - and yes, this academic stuff *is* useful in practice!

10 April 2008

Secure Software - Part I

From time to time I ask myself what secure software really is. Because if I knew what it is, I would be able to find methods and tools to design and build such software. How should it look, which properties should it have, how can I recognize it?

The trivial answer coming to mind immediately is:

Secure software is one which never crashes.

That's pretty simple, isn't it? But what is a "crash", and what does "never" mean? "Crash" is often associated with the well-known Blue Screen. Or with the segmentation fault. The program is gone after that, it has disappeared. That is the same as when I click "Exit" in a menu, which is nothing special. Well, it is not exactly the same: the former is not intended, whereas the latter is. So, is a "crash" the situation where a piece of software stops doing what I want? That description would match another manifestation of "crash": the program does not respond, the famous endless loop. In this case the software still exists, and it is even doing something, but either it is not doing the thing I want, or it is doing it far too often.
 
First conclusion: secure software is software that always does what I want, and as many times as I want.

The bad thing is that I do not always know what I should want. In many situations the software must tell me about my possibilities, so that I can think about what I want. Some software is very smart: it believes that it knows what I want and does it in advance, to save time and stupid questions (from the stupid user). Now, how can software never crash because it does exactly what I want all the time, if I don't know what I want, or if I can't tell the poor little thing what I want? Maybe I don't know what I should want to achieve my goal of creating something great; maybe I don't know the full consequences of wanting this or that?

Second conclusion: secure software is software that always does what I want, and as many times as I want, and which never lets any doubt come up about what I should want to achieve my goal.

That sounds great! But to be honest, the crash or the frozen software is not the really bad thing. The really bad, evil catastrophe is that the data get killed! That's the real reason why I would like to get a sledgehammer when faced with such a situation..... That hurts. A Blue Screen or a segmentation fault often results in bad, corrupted or even lost data. And frozen software is very good at preventing me from saving the work of many hours. From this point of view, secure software is software which never damages my data in the sense that they lose their value or their integrity, or that they cannot be processed further. Either the software can do what I want for all my data, or it never touches them at all.

Final conclusion: secure software is software that always does what I want, and as many times as I want, and which never lets any doubt come up about what I should want, and which does what I want for exactly all my data completely, or never touches them at all.

Isn't that a great result? But it's not everything. More to come later.

02 March 2008

Live UML - synchronizing explorative modeling

One more point on modeling with Smalltalk: once there is a first implementation of the application, derived from the Smalltalk live model as described in the last post, it must be developed further to cover all the necessary details. These details pop up from requirements which were unimportant for the basic model, or they are caused by the translation from the Smalltalk environment to the destination language. As a result, there is a runnable implementation of the original live model in the destination language which provides nearly all that the requirements demand, and which also meets all the demands of the destination environment. So far, so good.

But there is one drawback: the implementation has certainly drifted away from the Smalltalk model; the two are never synchronized. That would not be a problem in itself, but of course the implementation is not perfect. It has bugs, or new ideas or problems pop up during implementation (every environment has its own pitfalls :-) Or new requirements come into view, of course ;-). And then a decision is needed: resynchronize the Smalltalk model with the implementation and decide about these new requirements, problems etc. there, or throw away the model and work only with the implementation?

Well, it depends :-) But I have set up some simple rules to make this decision easier:

  • Don't leave the model too early. There is certainly a point where it makes no sense to keep the live model up to date. Most destination systems differ greatly from Smalltalk, and if one has to bother about too many details in Smalltalk, it is time to leave it. But I have observed that in many cases this point in time can be delayed. The model is a good tool even once the first code in the destination environment exists.
  • Don't ignore too many details. Of course the nature of modeling is to ignore some details and to focus on the building blocks of a problem. But making the model too simple forces you to keep it synchronized much longer than really needed. Therefore it also makes sense to anticipate and model some things which are strongly related to the destination platform. Saying "oh, in Smalltalk I don't have this problem" sometimes makes the model too simple. Also keep track of the interfaces of the destination platform, and model them with mock objects or something similar (see the sketch after this list). Ignoring input and output makes the model too simple.
  • Don't implement twice. If you have the full application both in Smalltalk and in the destination environment, you left the model too late :-). Some details which are not part of any model, which are purely matters of the destination language and not essential to data structures or input/output, must be solved directly in the destination environment.
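
To illustrate the second rule: a mock object for an interface of the destination platform could look like the following Smalltalk sketch (the serial port and all names are invented here; the real interfaces depend on the destination platform):

    Object subclass: #MockSerialPort
        instanceVariableNames: 'sentBytes'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'LiveModel-Mocks'

    MockSerialPort >> initialize
        sentBytes := OrderedCollection new

    MockSerialPort >> send: aByteArray
        "Record instead of transmitting, so tests can inspect the traffic."
        sentBytes add: aByteArray

    MockSerialPort >> sentBytes
        ^ sentBytes

The model objects talk to this mock exactly as they would later talk to the real port, so input and output are part of the model from the beginning.
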
For a successful result, one has to gain experience about the best level of detail and the best point in time to leave the model. So I will just keep using this approach, to gain that experience.