17 November 2008

Requirement landscape with Croquet

Requirement Engineering is something I have been looking at for many years now, and I have always been interested in it. But there is one thing I am still waiting for: the visualisation of requirements.
So one might ask: visualisation? Requirements are textual, semantically rich objects, nothing that comes close to what is needed for making charts or interesting 3D pictures. Well, this may be right. But requirements have structure: they are related to other requirements, and they form groups. In addition, they have an inner structure, if they are built using the methods available for professional Requirement Engineering. Visualising requirements with respect to such structural aspects would mean visualising the network or graph of requirements. Clusters, or too sparse regions of the requirement network, would be visible at once.
But I think more could be done. The set of requirements should describe the system to build or to engineer as completely as possible. This set as a whole should be able to induce a picture of the software. By reading the requirements, one after another, this picture comes into existence in the mind of the reader, and its shape and details become more complete with every requirement read.

But: wouldn't it be nice to visualise exactly that process?

As a result, this visualisation would show which requirement covers which aspect of a system, and to what extent. For example, the system could be a landscape, or a city street plan, and the size and position of the requirements would show their contribution to the whole system.
For a while now, I have been asking myself whether Croquet would be an optimal tool for such visualisations. The space metaphor, and even more the collaboration possibilities of Croquet, seem promising for such visualisation experiments. Unfortunately, I have no time, so I really hope that someone else will do some experiments in this direction.
This is what I would do:

  • defining a meta-model of requirements, to address the aspect of their inner structure, just like EMOF does for conventional models. I do not mean a semantic model, only a structural model (conditions, quantifiers, action, object, subject, test criteria ...) - see the sketch after this list
  • defining a meta-model for describing the always-needed aspects (= questions) of a software project; let's call it the system meta-model. You could imagine this as a checklist of questions that need to be answered in most projects.
  • defining a semantic meta-model to describe the relations between semantics in general, the requirement parts, and the system meta-model.
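To make the first point a bit more concrete, here is a minimal sketch of such a structural meta-model in Smalltalk (Pharo-style syntax; all class, variable and method names are only illustrative assumptions, not an existing framework):

    Object subclass: #Requirement
        instanceVariableNames: 'condition subject action object testCriterion relatedRequirements'
        classVariableNames: ''
        package: 'ReqMetaModel'.

    Requirement >> initialize
        super initialize.
        relatedRequirements := OrderedCollection new

    "links to other requirements give the requirement network its graph structure"
    Requirement >> relateTo: aRequirement
        relatedRequirements add: aRequirement

    "building a tiny network in a workspace"
    | r1 r2 |
    r1 := Requirement new.
    r2 := Requirement new.
    r1 relateTo: r2.

The visualisation of the requirement network described above could then work directly on the relatedRequirements links.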

These are only raw ideas. Too bad I have no time...

09 November 2008

Decisions of a Software Architect

In the past, it was not so clear to me which things are constitutive for the work of a Software Architect. This is especially true when it comes to distinguishing a Software Architect from a Software Engineer. At the moment, my work for the company consists of UML, pure architecture design and requirement engineering. From this, I noticed some things which may be surprisingly important aspects of the work of a Software Architect. I call them "surprising" not because I never thought about them, but because their real effect and importance in practice hit me hard these days.

If a new system has to be made, it all starts with requirements. Everyone knows that. Some people regard requirement engineering as collecting a bunch of text or a big set of items in a database in order to describe some properties of the system to build. For me, requirement engineering is much more: it is the search for and identification of the model that best describes the customer universe. Such a model must describe the processes and information which constitute this universe and in which the new system will be embedded. We will see why this is important.

From theory I knew about the value of Use Cases and UML Activity Diagrams in the context of requirement engineering, but I was surprised by how useful they are for performing requirement engineering as described above. By binding requirements to Activity Diagrams and Use Cases, it is possible to get an output of the requirement engineering phase which - besides the description of the properties of the new system - answers the questions of which actions are performed by whom, which information is needed, which information results, and how the system to build will fit into this universe.

All this is the input for the Software Architect, and he must use it to find the answer to this question:

What are the parts of the system to build, and how must they interact?

Well, this seems trivial, and it is even easier if the processes and information flow in the customer universe are known. But now comes a surprising aspect:

Which of these parts of the system are best described by the data they use and compute, and which parts are best described by the processes they perform?

The answer to this depends not only on the properties of the future system; it depends strongly on the nature of the processes and the information flow in the customer universe. The outcome of this architectural decision has very long-range consequences because it determines the complete design and implementation process: a data-driven architecture has different characteristics than a process-driven architecture. Sure, data and processes are not independent. But for the technical design process it makes a difference whether one thinks first in terms of data or in terms of a sequence of actions.

In the data-driven world, the next decisions are related to questions like who will access the data, how often, and in which way. In the process-driven world, the Software Architect has to decide about events and sequences, and has to work with time.
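A minimal sketch of what this difference looks like in code (Smalltalk, with purely invented names): in the data-driven variant the central artifact is the data object and its access paths; in the process-driven variant it is the ordered sequence of actions.

    "data-driven: design starts from the data and from who reads or computes it"
    Object subclass: #TrackRecord
        instanceVariableNames: 'position velocity'
        classVariableNames: ''
        package: 'ArchSketch'.

    "process-driven: design starts from the sequence of steps and events"
    Object subclass: #TrackingProcess
        instanceVariableNames: 'steps'
        classVariableNames: ''
        package: 'ArchSketch'.

    TrackingProcess >> initialize
        super initialize.
        steps := OrderedCollection new

    TrackingProcess >> addStep: aOneArgBlock
        steps add: aOneArgBlock

    TrackingProcess >> run: aTrackRecord
        "apply each step to the datum, in its defined order"
        steps do: [:each | each value: aTrackRecord]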

At this stage, the next surprising aspect, the next decision point, has to be considered, and it has a huge impact on the design:

What are the points of possible change?

Here, the Software Architect must invest fantasy and experience to identify the "weak" parts of the required system. He must look for the things which may not be as fixed as the requirements suggest, and as a result, he must decide which things of the system (roles, data, actions) must be designed in a way that makes change easy. This is only possible if the requirements describe more than the system itself. I think that the best sources for finding such points of possible change are the relations of the system to the universe of the customer.

And here we are at the heart of the work of a Software Architect, as I believe: find a design which makes change in the foreseeable directions easy. This gives the material for further design decisions; it helps to reduce the set of possible solutions for a design task, and to make the system what we are told all the time to make it: simple, but not simpler. Design for change, but only for the change which will be needed. The power of design for change is what really surprised me...

Now the detail work can start. But one thing remains, which I have often discussed with colleagues: what about technical constraints? Must a Software Architect know about the possibilities and properties of certain technical solutions? Well, at this stage, my assumption is that he must not. On the contrary: the Software Architect must find a design which can adapt to technical constraints. So the decision point would be:

How can the design be made robust against technical constraints and the implementation?

If writing a required amount of data to hard disk is not possible by simply writing to one single file, an architecture is needed which handles multiple files. The Software Architect cannot know all the detailed constraints which come from the many years of experience of a software developer, a database specialist, a web specialist etc. In addition, such constraints are change points "from the bottom". Yes, the architect must have a rough knowledge of what is possible with today's techniques, but only in order not to be too unrealistic. Therefore, a Software Architect must keep contact with specialists. By the way, here I also see the border to the role of the Software Engineer: this one must have detailed technical knowledge, and therefore may be responsible for the "lower" or detailed parts of an architecture. He has to deal with the technical details.
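To stay with the file example, a hedged sketch of such robustness (Smalltalk, invented names, not a real project): the rest of the design depends only on an abstract sink, and whether one file or many files end up on disk is a detail hidden behind that protocol.

    Object subclass: #DataSink
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'ArchSketch'.

    "the rest of the design depends only on this protocol"
    DataSink >> write: aString
        self subclassResponsibility

    DataSink subclass: #SingleStreamSink
        instanceVariableNames: 'stream'
        classVariableNames: ''
        package: 'ArchSketch'.

    SingleStreamSink >> initialize
        super initialize.
        stream := WriteStream on: String new

    SingleStreamSink >> write: aString
        stream nextPutAll: aString

    "a MultiFileSink would override write: to roll over to a new file
     when a size limit is reached - invisible to the rest of the system"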

So, as a summary, these questions help to arrive at architectural decisions:

- What are the roles and the parts of a system, and how do they interact?
- Which parts are data-driven, which are process-driven?
- What are the points of possible change in the customer's universe?
- What are the points of possible change coming from the technical implementation?


Of course, this is not all there is to the work of a Software Architect, but it really helps ;-)

22 September 2008

Back to old hobbies


These days I have returned to working with Erlang. I cannot really say what the reason or impetus for doing so was, but anyway, Erlang is an interesting language, now and for the time to come. In order to support my work a little better, I have implemented very simple syntax highlighting, a compilation phase and a file type for Erlang in Apple's Xcode. Of course Xcode has its flaws, but the general approach it provides has some advantages.

What to do in Erlang? Well, I have never lost my interest in SVG, and so I will try to write a little render engine for SVG. Not because I think that is what the world is waiting for, but just to combine two interesting topics, and to provide a graphical interface for my Artificial Neural Network and Cellular Computing engine (dream, dream). Such kinds of applications may be well suited to the distributed computing approach of Erlang.

The other thing I have returned to is drawing. In fact I never dropped this hobby, but in the last months and years it was pushed deep into the background for lack of time. That should be corrected. Here, my main interest is characters or figures, in the style of cartoons, comics or oil pastels. The picture in this post is one example, a result from a so-called Manga drawing course. OK, the word "course" is not appropriate, it was just a kind of gathering, but the people there were really interesting.
So let's go on!

27 July 2008

The word "automatism" or Secure Software, Part II

"Automtism" is a difficult term. From principle, it should be a positive word, because would'nt it be positive if a machine takes care about things which otherwise I have to do in a boring, time consuming way ? Doesn't automatism mean to get the result with nearly no effort from me ? My observation is, that the more a person knows about computer and software, the more the term 'automatism' becomes negative. It may be the wish and the expactation to keep control, as a programmer has the control because he is the creator and master of the software. Interestingly, sometimes this exceeds the wish for comfort.

Because of this, I use the terms "automatism" and "automatically" very carefully. Of course, the user should get what he wants with as little effort as possible. Give him the result with a few actions. But this is no justification for unforeseeable, magic and unobservable behaviour of the machine. The user must be able to imagine what is going on; he should always be able to explain to himself what the machine is doing (in principle, not in detail, of course!). The feeling of control should never vanish.

So, we could extend the definition of secure software to:

secure software is software that always does what I want,
and as many times as I want,
and which never lets any doubt come up about what I should want,
and which does what I want for exactly all my data completely, or never touches them at all,
and which never lets any doubt come up that it is all really true.


Following this insight, it is important to base the automatism on well-defined rules. I do not mean formulas or detailed algorithms. I mean top-level rules, like "all you create with this software is a document which can be saved and printed" or "a crane has a rope which can be wound". They should explain how something works in principle. Or in other words: for the small little universe (often called the "domain") every machine is embedded in, such rules would be the basic metaphysics. Therefore, I will call them "Universal Rules".
Of course, the set of Universal Rules should be complete in the sense that all machine behaviour can be explained with them. The set should also be limited, such that a human can keep the whole thing in view.

When I think about a new architecture to create, I always start with identifying the roles (there is more to say about roles later) and the set of Universal Rules. From this, introducing classes and behaviour is a straightforward task. In fact, this is also the reason why I love to use Smalltalk at this stage of software development: it allows me to quickly try out the rules and roles, and to adapt them if they do not match the problem to solve. In this sense, Smalltalk is my Universal Rules explorer :-)
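As a small, invented example of such exploration (Pharo-style Smalltalk): the crane rule from above, expressed directly as an object that can be tried out and adapted in a workspace.

    Object subclass: #Crane
        instanceVariableNames: 'ropeLength'
        classVariableNames: ''
        package: 'RuleExplorer'.

    Crane >> initialize
        super initialize.
        ropeLength := 10

    "the universal rule: a crane has a rope which can be wound up and down"
    Crane >> windUp: meters
        ropeLength := (ropeLength - meters) max: 0

    Crane >> windDown: meters
        ropeLength := ropeLength + meters

    "try the rule, inspect the result, adapt the rule if it does not fit"
    | crane |
    crane := Crane new.
    crane windUp: 3; windDown: 1.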

In this way, I believe, it is possible to design software which does as much as possible automatically, but does not exclude the user from what is going on.

29 June 2008

Taking new points of view

In the last few weeks, some new things came up which pushed me to new points of view on the world of software.
The first is the iPhone from Apple. I tried out the SDK, and I like it very much. Old feelings and memories came up from the time I wrote the software for my Diploma thesis on NeXTStep. I like Objective-C, I like the graphics model (Display PostScript then, Quartz or Display PDF now)... it is an exciting way to realize ideas.
But the property "new point of view" is not a consequence of the quality of the SDK or the language. The iPhone and its new kind of user experience is a guide to looking at user interfaces differently. Multitouch offers new ways of interactive work with data, and the limited screen size forces one to reduce the optical design down to what is really needed. For example, I have noticed that for some tasks the iPhone is my favourite tool: some web sites are optimized for the iPhone (or mobile phones in general) so that they are more comfortable to use on such devices. As a result, on the iPhone only the relevant things are displayed, instead of the information flood presented by many current web pages when using the computer - and I hope there will be a lot of such optimized web pages in the future.
These two constraints - show the relevant things on a small screen in an aesthetic (!) way, and make the data or the documents themselves the objects of manipulation - are the reasons I have decided to work with the iPhone and the SDK in the future.
One remark: of course these things are not completely new; theoretical papers and books (like the one from Raskin, and other work) have described such problems and solutions already. Not to forget the original idea of Alan Kay's DynaBook. But now it has practical existence: there is an interesting device, and you can play with its hard- and software.

The other thing giving me a new point of view has to do with my work at the company. Here I got some insight into the field of "safety software analysis". Working in this area means looking at possible hazards which can be caused by the software, and how severe such a hazard would be if it happened. Then, depending on which international or national safety standard must be applied, the analysis determines which probability of failure must be required from software modules or even single functions, if the hazard is to happen with at most a given probability.
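A strongly simplified illustration of the kind of arithmetic involved (my own toy numbers, not taken from any particular standard): the hazard rate is the product of the demand rate and the failure probability of the handling function, so the tolerable failure probability follows directly:

    P(\mathrm{hazard}) = P(\mathrm{demand}) \cdot P(\mathrm{failure} \mid \mathrm{demand})
    \quad\Rightarrow\quad
    P(\mathrm{failure} \mid \mathrm{demand}) \le \frac{P_{\max}(\mathrm{hazard})}{P(\mathrm{demand})}

For example, if the hazard may occur with a rate of at most 10^-6 per hour, and the dangerous demand arrives at a rate of 10^-2 per hour, then the handling function may fail on at most one demand in 10^4.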
All these things have to do with cause chains, probability, and the analysis and modeling of software. Definitely my kind of interest, but in sum also a new point of view on software for me :-)

11 May 2008

Design, everywhere!

We are living in the age of Web 2.0. This can be seen in how fast the nicely designed Web is growing. CSS and modern Web frameworks like Seaside, Ruby on Rails or Zope give people the option to meet aesthetic needs and expectations. Even standard GUI frameworks provide so much graphical power that a real designer is needed to use it to the best result. Gone is the time of the limited, gray and rigidly shaped windows and dialog boxes we believed we had become accustomed to over the last 15 years. Today we have designed Web interfaces, designed reports, designed user interfaces.

However, the word "design" faces us not only in relation to this "outer" things, the immediatly visible parts of software. Some people also think about software design, architecture design, system design and so on. But in this case, not the aesthetic aspect is adressed, these terms belonging to the inner part of the machine, the functionality, its building blocks of programming. All the things an engineer is called to bother with, not an designer.

The surprising fact is that even GUIs are not well designed if their functionality is bad. Every developer who has been in contact with customers knows this very well. In fact, all user interfaces need both: aesthetics and usability. Consequently, I simply ask: can it be that architecture design must have aesthetic aspects too?

The short answer is: yes! All the years of working as a software developer in many different companies have taught me one thing: aesthetics is a good advisor when judging an architecture for its technical quality. Just as all measurement engineers know that the human eye can draw a mean line through data points very accurately, human perception of aesthetics is very robust. But the problem is that it is not very explainable what "beautiful" or "ugly" is, and it is far less explainable in what way a "beautiful" design of an architecture guarantees a technical plus for its functionality.

So a mapping is to be defined which correlates aesthetic principles with technical creation rules. The goal of such a mapping would be to understand more of the mechanisms of technical design and to be able to teach it. Some simple mappings are well known, though. Symmetry is often a design goal in aesthetics as in technical design. Another is rich simplicity (about this term I will think later). But there are a lot more, which many of us use without being conscious of them.

But there is help: at the Software Composition Group at the University of Bern (Switzerland), a tool called Code City has been developed. With it, it may be possible to visualize software such that the aesthetic senses can be used. So this tool is a translator between the technical domain and the aesthetic domain. Of course, this is only one possible solution for this mapping, but it is one we have now ;-). This tool is positioned in the context of the Moose platform I mentioned in other posts, and because there are efforts to port Moose to Squeak, I hope that I can use all the other things related to Moose soon :-)

27 April 2008

Logging the map

A good logfile can help to find errors during software development in many ways. It can show which operation happens at which time. It may show important results, data, or the content of variables which determine the control flow. Consequently, the most common parts of logfiles are the timestamp, followed by some information about the operation performed at that time. This may be indicated by the name of a method or function, the name of the object being active, or similar things.

Such a logfile can be seen as a recording of things happening or occurring in time. Therefore, I will call it a "software camera". It is one-dimensional, and therefore strictly sequential. A program with its many branches, with its possibilities and options, is serialized into one linear sequence of processing steps, and the "software camera" records them. This is very useful for debugging an application working on streaming data. The data itself is ordered in a strict sequence (in time), and if some errors only occur at certain parts of the stream, logging with the "software camera" may be the only way to find the error.

Now the question is: can something other than the timestamp and the currently active part of the source code be of interest? To explain this, I have to go into some details.

In the years 2006 and 2007, when I worked in Basel in Switzerland, I sometimes met the guys of the Software Composition Group at Bern. I loved this, because these guys are very smart, and discussing things with them is fun. For example, I talked with Adrian Kuhn about his idea of creating a software map. This should be like the well-known map of streets and towns, but it was not clear which metric should correspond to the "euclidean distance" used by such a conventional map. As one possible solution, we discussed using the distance of a data packet from the point where its processing would be finished. This idea is a consequence of data-oriented design: all that matters here are the data and what happens to them. The position of a data packet would be defined by "measuring" how many processing steps have already been applied and how many steps are still to go. Time is only used for "speed" measurements, not as a coordinate.

These thoughts were my starting point when designing a logfile for a project at my company. It is also an application acting on streaming data, and I wanted to write out information which allows me to see which data packets are at which point in the processing landscape. In detail, this was:
  • the identification of the specific datum (sometimes with, sometimes without its values)
  • the action the datum sits in front of,
  • or the action it is currently in,
  • or the action it came out of.
I did not use any timestamp at all.
Going this way helped me to build the architecture such that it is robust against any way the data may come in. They can be shuffled, delayed, reordered in many ways. Tracking the data through the scenery of processing helped me to identify "unused roads", too narrow "roads" and important "crossings". In a next step, I will enrich the information so that I can generate SVG and create a real map. We will see.
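A minimal sketch of what such map-style log lines can look like (Smalltalk; the format and all names are my reconstruction for illustration, not the actual project code):

    Object subclass: #MapLogger
        instanceVariableNames: 'stream'
        classVariableNames: ''
        package: 'LogMap'.

    MapLogger >> initialize
        super initialize.
        stream := Transcript

    "one line per datum and position: no timestamp, only id, relation and processing stage"
    MapLogger >> datum: anId is: aRelation stage: aStageName
        stream
            nextPutAll: anId printString;
            nextPutAll: ' '; nextPutAll: aRelation;
            nextPutAll: ' '; nextPutAll: aStageName;
            cr

    "the resulting log shows where a datum sits in the processing landscape"
    | log |
    log := MapLogger new.
    log datum: 4711 is: 'before' stage: 'decode'.
    log datum: 4711 is: 'in' stage: 'decode'.
    log datum: 4711 is: 'out-of' stage: 'decode'.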

To read more about software visualization, I suggest looking at the website of Moose, a reengineering and modeling tool. In its context there are many tools with nice ideas. It is worth a look - and yes, this academic stuff *is* useful in practice!

10 April 2008

Secure Software - Part I

From time to time, I ask myself what secure software really is. Because if I knew what it is, I would be able to find methods and tools to design and build such software. How should it look, which properties should it have, how can I detect it?

The trivial answer coming to mind immediately is:

Secure software is software which never crashes.

That's pretty simple, isn't it? But - what is a "crash", and what does "never" mean? "Crash" is often associated with the well-known Blue Screen. Or with the segmentation fault. The program is gone after that; it has disappeared. That's the same as when I click on "Exit" in a menu, which is nothing special. Well, it is not exactly the same: the former is not intended, whereas the latter is. So, is a "crash" the situation where software stops doing what I want? That description would match another manifestation of "crash": the program does not respond, the famous endless loop. In this case, the software still exists, and it does something as well, but either it does not do the thing I want, or it does it far too often.
 
First conclusion: secure software is software that always does what I want, and as many times as I want.

The bad thing is that I do not always know what I should want. In many situations the software must tell me about my possibilities, so that I can think about what I want. Some software is very smart: it believes it knows what I want and does it in advance, to spare time and stupid questions (from the stupid user). Now - how can software never crash, because it does exactly what I want all the time, if I don't know what I want, or if I can't tell the poor little thing what I want? Maybe I don't know what I should want to achieve my goal of creating something great; maybe I don't know the full consequences of wanting this or that?

Second conclusion: secure software is software that always does what I want, and as many times as I want, and which never lets any doubt come up about what I should want to achieve my goal.

That sounds great! But to be honest - the crashed or frozen software is not the really bad thing. The really bad, evil catastrophe is when the data are killed! That's the real reason why I would like to get a sledgehammer when faced with such a situation... That hurts. Blue Screens or segmentation faults often result in bad, corrupted or even lost data. And frozen software is very good at preventing me from saving the work of many hours. From this point of view, secure software is something which never damages my data in the sense that they lose their value or their integrity, or that they cannot be processed further. Either the software can do what I want for all my data, or it never touches them at all.

Final conclusion: secure software is software that always does what I want, and as many times as I want, and which never lets any doubt come up about what I should want, and which does what I want for exactly all my data completely, or never touches them at all.

Isn't that a great result? But it is not everything. More to come later.

02 March 2008

Live UML - synchronizing the explorative model

One more point on modeling with Smalltalk: if there is now a first implementation of the application, starting from the Smalltalk live model as described in the last post, it must be developed further to cover all the necessary details. These details pop up from requirements which were unimportant for the basic model, or they are caused by the translation from the Smalltalk environment to the destination language. As a result, there is a runnable implementation of the original live model in the destination language which provides nearly all that the requirements demand, and which also matches all demands of the destination environment. So far, so good.

But there is one drawback: the implementation has certainly drifted away from the Smalltalk model; the two are never synchronized. That would not be a problem in itself, but of course the implementation is not perfect. It has bugs, or during the implementation new ideas or problems pop up (every environment has its own pitfalls :-). Or new requirements come into view, of course ;-). And then a decision is needed: resynchronize the Smalltalk model with the implementation, and decide about these new requirements, problems etc. there, or throw away the model and work only with the implementation?

Well, it depends :-) But I have set up some simple rules to make this decision easier:

  • Don't leave the model too early. There is certainly a point in time when it makes no sense to keep the live model up to date. Most destination systems differ greatly from Smalltalk, and if one has to bother with too many details in Smalltalk, it is time to leave it. But I have observed that in many cases this point in time can be delayed. The model is a good tool even when the first code in the destination environment already exists.
  • Don't ignore too many details. Of course the nature of modelling is to ignore some details and to focus on the building blocks of a problem. But making the model too easy forces one to keep the model synchronized much longer than really needed. Therefore, it also makes sense to anticipate and model some things which are strongly related to the destination platform. To say "oh, in Smalltalk I don't have this problem" makes the model too easy sometimes. Also keep track of the interfaces of the destination platform, and model them with mock objects or something similar (see the sketch after this list). Ignoring input and output makes the model too easy.
  • Don't implement twice. If you have the full application in Smalltalk and in the destination environment, you have left the model too late :-). Some details, which are not part of any model, which are only matters of the destination language and not essential to data structures or input/output, must be solved directly in the destination environment.
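To illustrate the mock-object point with an invented example (not from a real project): a destination-platform interface, say a socket-based radar feed, modelled by a mock that replays canned packets, so input and output remain part of the live model.

    Object subclass: #MockRadarFeed
        instanceVariableNames: 'packets'
        classVariableNames: ''
        package: 'LiveModel'.

    MockRadarFeed >> initialize
        super initialize.
        "canned packets standing in for the real socket interface"
        packets := OrderedCollection withAll: #((1 'trackA') (2 'trackB'))

    "the same protocol the real feed will offer in the destination environment"
    MockRadarFeed >> nextPacket
        ^packets isEmpty
            ifTrue: [nil]
            ifFalse: [packets removeFirst]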
For a successful result, one has to gain experience of the best level of detail and of the best point in time to leave the model. So I will just keep using this approach, to gain that experience.

24 February 2008

Live UML - Explorative Modeling in the small

Looking at the applications programmed in the world, and at the programming languages used, the situation does not seem positive for Smalltalk. But for me, Smalltalk is not dead. On the contrary: for me it has growing value, although my daily life is determined by C and C++. How can this be?

If I have to build a new application or a new module, today I use Smalltalk to design the architecture. In general, if one has to do something new, classes must be identified - what they do, which data they work on, and more. Extensibility (for new requirements) must be kept in focus, and an interface must come to life which can be understood and used by others. Here Smalltalk can help a lot: a few objects doing something are written in a short time. Every rearrangement and every new idea can be implemented quickly. Because Smalltalk allows one to modify code even in the debugger, and to evaluate any code snippet at any place, the right design grows the way evolution works: simply try it. It grows, in the best sense.

At the end, I have a set of objects, a set of methods and a communication network (who talks to whom) which can carry the basic functionality. From this it is easy to derive UML - for communicating with managers - and the implementation in the destination language. Now, if - as in my case - the destination language is C++, there are some pitfalls of course. But because the runnable Smalltalk model has partitioned the problem into static AND dynamic elements, the problems are small and solvable.
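To give a flavour of this way of working (a toy example of my own, not from a real project; Pharo-style syntax): a first guess at a class and its interface, evaluated immediately in a workspace.

    Object subclass: #RadarParser
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'Explore'.

    "first guess at the interface; cheap to throw away and rearrange"
    RadarParser >> parse: aString
        ^(aString substrings: ';') collect: [:field | field trimBoth]

    "evaluate directly, inspect the result, adapt the design"
    RadarParser new parse: 'id=42; x=10.5; y=3.2'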

In fact, this is xM at the smallest possible scale...

In the future, I plan to create some ready-to-use modules which provide elements that are used all the time in our company. A radar data parser, or some kinds of common displays or GUI elements, are examples. Especially when combined with Seaside, this should result in a great Live UML (xM) framework.

So Smalltalk is not dead; on the contrary, it is THE tool for robust and failure-safe development!

14 February 2008

Rendering documents - support the semantics

The rendering of documents is only one part of the story. Another goal would be not only to build a tool for creating a document in the sense of setting it in a certain form, but also to collect the right content for it. Now, it is clear that it is not possible today to build didactically efficient documents which are optimal for human reading and understanding. But if we do not try single steps in this direction, we will never come closer to the goal.

So what can be done today? Documents are answers to questions. That sounds trivial, but it is the crucial point. So maintaining content means maintaining questions. This could be a simple first step: manage not only the content, but also the questions related to it, and manage the relations as well. The other thing is terms. Terms are the most basic but important building blocks of any content. Consequently, building term networks (and visualising them) would be another brick in the wall.
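A minimal sketch of such a term network (Smalltalk, with invented names): terms as nodes, with labelled relations to other terms - the questions could be attached in the same way.

    Object subclass: #Term
        instanceVariableNames: 'name relations'
        classVariableNames: ''
        package: 'TermNet'.

    Term >> initialize
        super initialize.
        relations := Dictionary new

    Term >> name: aString
        name := aString

    "a labelled edge to another term, e.g. 'isPartOf' or 'answers'"
    Term >> relate: aLabel to: aTerm
        (relations at: aLabel ifAbsentPut: [OrderedCollection new]) add: aTerm

    "building and linking two terms in a workspace"
    | doc req |
    doc := Term new name: 'document'.
    req := Term new name: 'requirement'.
    req relate: 'isDescribedIn' to: doc.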

At the moment, I am reading about Husserl's Phenomenology, a philosophy which examines (among other things) how our understanding can come to see the nature of things. Maybe there are aspects which could be tried out in the context of document rendering. And again, computer science would become practical philosophy.

Of course, I am also trying to get an overview of things like the Semantic Web.

30 January 2008

Rendering documents

For many years I have been interested in Requirement Engineering and in documentation for the development process (especially UML). This of course requires creating documents. Often, documents must be written by hand using some office software (like OpenOffice). That seems normal, but documents in a company environment must have a defined format and must contain defined things. On the graphical side - the UML part, for example - the writer has to take care of a good, readable layout and design.
So I am dreaming of a system which could render documents like requirements, test plans or even diagrams. Well, some of the bigger software on the CASE market can do this; it is possible, for example, to generate Microsoft Office documents from database entries. As far as I know, these solutions are mostly based on templates. But I want my own flexible render engine, which works with design rules and a meta-model of documents rather than with templates. Or in other words, I want to stress the point of *rendering*.
Because all modern office software uses XML for storing its documents, document rendering can be done in any language, and I of course have some special ones in mind (Erlang, Smalltalk).
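A hedged sketch of what "design rules instead of templates" could mean (Smalltalk, everything invented for illustration): each rule inspects a document element and emits an XML fragment, so the layout decisions live in the rules rather than in a fixed template. The text:p element is OpenDocument's paragraph element.

    Object subclass: #RenderRule
        instanceVariableNames: 'condition emitter'
        classVariableNames: ''
        package: 'DocRender'.

    RenderRule >> condition: aBlock emitter: anotherBlock
        condition := aBlock.
        emitter := anotherBlock

    "apply the rule only where its condition holds"
    RenderRule >> renderIfMatching: anElement on: aStream
        (condition value: anElement)
            ifTrue: [aStream nextPutAll: (emitter value: anElement)]

    "a rule that renders plain strings as OpenDocument paragraphs"
    | rule out |
    rule := RenderRule new
        condition: [:e | e isString]
        emitter: [:e | '<text:p text:style-name="Body">', e, '</text:p>'].
    out := WriteStream on: String new.
    rule renderIfMatching: 'Hello, document world' on: out.
    out contents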
At my new job, I have to create a lot of documents, so I hope I can build a little bit of this during my work.