
04 March 2012

Black and White and the Grey between

For some time now, I've been thinking about Enriched Games (or Serious Games) in the browser. I like the idea of using SVG and JavaScript for this, or Processing. Bringing SVG to life needs a lot of JavaScript code running in the browser, and the same is true for Processing, whose JavaScript implementation is exactly that: a more or less big library, also running in the browser. And looking at this, a big question rises over the horizon: which functions have to run in the browser, and what should be the responsibility of the server?

That this question is not as trivial as it might seem at first look can be seen by considering the following technologies:

Take OnLive (TM), for example: the principle is to play games on low-powered devices in the browser by rendering the images on the server and sending them as a video stream to the client. The client only captures the interactions of the player and sends the interaction events (and only those) to the server, which renders the next image, and so on. Advantage: the computation power of the server is potentially unlimited. Effects become possible far beyond everything a single PC can do. Let us call this "Black".
On the other side, there are complex client-side applications like Google Docs (TM) and Salesforce (TM). There are a lot more, using HTML5, giving the impression of a desktop computer in a single browser. Let us call this "White".

And then remember the famous approach of OpenCroquet, a Smalltalk-based technology: the borders between client and server are blurred, because an OpenCroquet system is a set of more or less equal nodes which exchange not data, but computation. There is no central computation instance; every node gets the same result by applying the distributed computation. Complicated mechanisms ensure that the results are the same for every node. Advantage: the data are distributed, and the computation does not need that much transfer capacity. And this is a true "Grey".

And now what? Black, White, or Grey?

The problem with Black is its requirements on bandwidth and latency. The problem with White is the different levels of code: the server delivers code which then runs in the client. This creates a really complicated meta-structure.

As a hypothesis, my answer is to serve objects. Real objects, in the sense that they are independent (except for their communication with other objects) and complete entities. This would break up the meta-structure of the White approach while keeping the good things from "Grey". In addition, letting the objects be distributed opens space for techniques from the Black approach, if you imagine that a big part of an object's computation may be done on the server.
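
What could "serving objects" mean in practice? Here is a very rough Smalltalk sketch (everything in it is invented, just to pin down the idea): the client holds a thin local twin which answers what it can and forwards everything else, so a big part of the computation can live on the server without the shipped-code meta-structure of "White".

    Object subclass: #ServedObject
        instanceVariableNames: 'transport'
        classVariableNames: ''
        package: 'CoffeeGame'

    ServedObject >> transport: aBlock
        transport := aBlock

    ServedObject >> doesNotUnderstand: aMessage
        "Messages the local twin cannot answer go over the wire.
         A real version would subclass ProtoObject and speak some
         protocol to the server; here a pluggable block stands in."
        ^ transport value: aMessage selector value: aMessage arguments

    "workspace demo - a block simulates the server round trip:"
    | ball |
    ball := ServedObject new.
    ball transport: [:sel :args | 'server computed ', sel].
    Transcript show: ball nextFrame; cr.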

At the moment these are just ideas, but in the context of my "Coffee Game" I hope they will become more concrete and real :) But I don't want to think only in terms of bandwidth or language structure; I am searching for the right way to prepare for the time when we think only in functionality, not in clients, servers and browsers. The age of Cloud and Web is still at its dawn; there is plenty of room for better things :)



21 October 2011

Chicken and Egg

In all this dry and serious technical world, the call for human aspects gains more and more attention. If it is really the case that human beings have to create all these products born out of mathematics, logic and rationality, then for sure we have to look at the poor human beings in this process. How will they feel, will they like what they do, will they be happy?

Let's switch to a chicken farm. The fate of a chicken is to produce eggs, a thing which seems natural to it. Chickens lay eggs. So it may be that the chicken is happy about every egg, because it is the sense of its life, and it sees this sense of life. And once the egg is there, the chicken cares about the egg, until this tiny little white thing changes into a young yellow chick. Having reached that, the chicken feels good, yes?

Now, in the nasty modern chicken farms, where the chickens are doomed to sit very close together, nearly unable to move or to see anything other than steel and other chickens, it is hard to imagine that a chicken will be happy. It cannot see its egg, because the moment it is released from the body, the egg is taken away. The chicken has no contact with its sense of life, and the conditions for producing high-class eggs are missing: varied food, fresh air, sunlight, movement and normal interaction with all the other chickens.

A creative mind may have the idea to produce the best eggs ever by way of the luckiest chickens ever. What lucky chickens: they can walk in an English garden, the earthworms are hand-selected and at least 5 inches long, the temperature is very pleasant, like a warm sunny day in spring around tea time. Also, the chicken does not have to lay any egg if it chooses not to. Under this lucky fate, any egg we get must be the best egg ever, no?

What I want to say is this: the key word is balance. It is all about the product, but also about the humans and the processes (including technology) that build that product. But no factor is more important than the others. Sometimes I have the impression that in some discussions the human factor is overstressed. All factors are interwoven with the processes and laws which have to be considered. Software development is neither to be set up like a chicken factory, nor like a paradise for the self-fulfillment of software developers. The balance has to be held.





12 October 2011

Tell me a story

Today I've read architectural documents in preparation for a functional safety analysis. Such an analysis is performed to come to an assessment of whether the given system and its architectural elements fulfill the safety-related requirements. As you might imagine, for doing this work it is helpful to understand the effect chains and the overall operation of the architecture.

So, sitting there, trying to collect the facts spawning out of the pages into my brain and hoping to puzzle together a picture (the picture of the software), one thought flashed into my mind: can architecture documents tell stories - and should they even? (For those who read my last post: let's assume the Constructible Hypothesis is true.)

Technical systems are built by human beings (well, at least at the moment). Because humans evolve day by day, their growing experience causing shifting views and moving attitudes, these technical systems have a history. Humans learn about problems, learn about solutions, and in consequence learn to evolve methods and tools.

But not only the creator has a history; the environment - the source of the problems - is non-static too. Customers have new wishes; technical or environmental constraints change. All this together makes it undeniable that a technical product has a history. For example, many things on modern ships have their roots in the old wooden sailers. The instrumentation of airplanes is frozen history of aviation construction.

There is a movie quote (I think it was "Big Trouble in Little China" with Kurt Russell): "That's like all things begin: small and decent". Yes! This is the power of knowing history: the first versions of things are often simple, not elegant or efficient, but small enough not to cover up the basic idea behind a solution. Or better: at the beginning of technical things, the intention of the architectural elements is still visible. Going back in history means going to the core of a technical mechanism, the will of the creator. Going back means getting insight into the basic idea of things.

But there is more. We said technical things evolve. But they do not evolve without reason. If technical properties or functions are added, that should be done because of changed requirements! So history should tell you a story about sound decisions: why this function or this effect was added to the system, and why the old system was not good enough. Again, evolutionary steps have an intention behind them (hopefully!). This is far from trivial; many team members don't know why software module x or parameter y was added. Look how many things are introduced and killed later on because the knowledge of their introduction got lost.

Now, knowing the history of a technical product seems good. But the question is: can an architectural documentation tell that history?

Please don't think of the huge version trees of tools like CVS or Subversion. It is not practical to write down a full history log, of course not. But if you see a technical product as a chain of design decisions, controlled by intentions, it might become clearer how to document:

show the basic problem and the basic idea, and show the most important intentions from the historic sequence of intentions shaping the architecture as it is today.

Showing intentions is not necessarily complicated: well-chosen names and wording can suggest what the basic idea or problem was. Also, arranging the document along a level-of-detail gradient can have the effect of presenting the reader with a time trip. AND: show the associated problems!
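
A hedged example of what this can look like (the module and its history here are invented for illustration): a Smalltalk class comment, or the opening paragraph of a module chapter, that tells the story before the mechanics.

    "I send telegrams to the interlocking controller.
     Basic problem: version 1 wrote directly to the serial port and
     lost telegrams under load - that is why the internal queue
     exists. The redundant send path came later, together with a new
     availability requirement. If you touch the queueing, read the
     decision log first."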

But my statement is this: just nailing down an architecture as it stands in front of you is often not enough. Such documentation may be useful for rebuilding technical things (schematics, for example), but not for understanding them. Or to say it much more simply: show the human factor in the architecture.

I know, this is no precise plan for how to write documents. That may be part of a later post :)


09 September 2011

Agile, Architecture and other things

There are questions about software development I'm really concerned about. These questions arose from my experience as a software engineer, software architect, requirements engineer and team leader.

One expression of these questions is the following problem:
  • How can software engineering become exactly that: engineering?
This question is not trivial. But first we have to ask whether an answer exists at all. If there is one, it should be possible to compare software engineering with other domains of engineering, say the mechanical or architecture domains. Often I try to map properties of mechanical engineering onto software engineering. But the special mode of existence of software - not bound to physical resources, with multiple instances possible without additional matter or energy - may have an impact on the metaphysics of software.

Now, one of these special properties of software is its potentially unlimited effect. Where mechanical devices are limited by matter and energy, software is completely free, not bound, limited only by its runtime, the computer. And we have learned how fast computers extend their capabilities. In other words, the effects caused by a running piece of software can be very, very complex.

This observation raises the following questions:

  • Is it possible to construct technical systems of any complexity level? (The Constructible Hypothesis) Or, as the counter-statement:
  • In general, can technical systems only be grown by applying evolutionary principles? (The Grown Universe Hypothesis)

If the first question has a positive answer, then the question above (the existence of engineering for software) has one too. Both questions also touch another aspect of things built by humans: the aspect of control. To construct things means that the creator has control over every static and dynamic element in the cause chain. The cause chains are arranged by logical reasoning, using basic laws. Here would be the place for an engineering technique for software. But exactly here the fog begins, because we know that not everything can be calculated, which means that full control over a system of arbitrary complexity may not be possible. Whether calculability is equal to controllability is an open question.

But if the Constructible Hypothesis were not true, the only other way of building systems would be to let them grow and arrange solutions by themselves. The evolutionary principle means providing just the environmental conditions (or requirements) and letting the system grow. This need not happen through the system itself, as with Genetic Algorithms: enhancing a piece of software revision by revision through human teams is also a kind of growth and evolution. BTW, this is where I see agile and lean methods being useful.

If we assume that the Constructible Hypothesis is right, one may ask how we could handle this. One way may be the onion model of control: a few basic laws provide the foundation to build up simple entities and their properties (like atoms from the four basic forces and quantum theory, for example). These entities form their own rules on their level, from which bigger entities are built with their own rules (molecules, like proteins), and so on (cells, bodies...). Every level has its own rules, based on the preceding level and narrowing the space of possible behaviors. DSLs operate on the same idea.

Now look at a cellular automaton. There are also very basic rules, which generate a special behavior of the automaton over time. In the Game of Life, for example, patterns come out which themselves show behavior. The "glider" is such a pattern, moving over the 2D grid of the cellular automaton. But there is one big difference to the onion model of control: the glider has no rules of its own; its behavior is fully and only determined by the basic cellular automaton rules! That means controlling the glider and higher-level patterns is only possible through the basic cellular automaton rules and the initial conditions. In the onion model of control, you would have constructive possibilities on every level (you could construct different proteins to create a special effect on that level, for example).
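
To make this concrete, here is the Game of Life as a minimal Smalltalk workspace sketch (my own toy version, not taken from any framework), with a glider as the start pattern:

    | live counts next |
    live := Set withAll: { 1@0. 2@1. 0@2. 1@2. 2@2 }.  "a glider"
    4 timesRepeat: [
        counts := Dictionary new.
        live do: [:cell |
            -1 to: 1 do: [:dx |
                -1 to: 1 do: [:dy |
                    (dx = 0 and: [dy = 0]) ifFalse: [
                        | n |
                        n := cell + (dx @ dy).
                        counts at: n put: (counts at: n ifAbsent: [0]) + 1]]]].
        next := Set new.
        counts keysAndValuesDo: [:cell :total |
            (total = 3 or: [total = 2 and: [live includes: cell]])
                ifTrue: [next add: cell]].
        live := next].
    Transcript show: live printString; cr.
    "after these four generations the same glider shape reappears,
     shifted one cell diagonally - the shift is nowhere in the two
     rules above, it emerges"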

From this it is clear that if the cellular automaton model (or, to name it after John von Neumann, the Calculating Universe Model) is true, only the Grown Universe Hypothesis can be true. If, on the other hand, the onion model is correct, then both the Constructible Hypothesis and the Grown Universe Hypothesis have a chance to be true.

So these questions arise:

  • For the control behavior of human-built systems, is the Onion Model of Control true? Or:
  • For the control behavior of human-built systems, is the Calculating Universe Model true?

For me these are important questions, even if they seem not that pragmatic :) But it is important to investigate what we really can do and achieve in principle in the domain of software engineering. Of course this is no exact analysis, but it should illustrate what I'm thinking about.

If someone has a hint about already available work or texts from philosophers, computer scientists and so on, I would be glad about a pointer :)



02 September 2011

Back to Programming - with Smalltalk

For some years, I haven't written down a single line of code. One reason was the change in my profession: having started as a software engineer, I currently work as a Safety Engineer, a role which produces many documents, but no code.

Since the start of this year, I'm also working on games for game-based learning, or Enriched Games (better known as "Serious Games", a somewhat irritating term). In these days of Web and Cloud, it is clear that my sample games have to run as web applications. So the question arises: which platform or technology to use for this?

I've seen many programming languages. Haskell I like a lot, but it needs sound knowledge of Category Theory to really unwrap its full power. Lua is interesting. Scala is interesting too, but a little bit complex; sometimes I'm afraid that Scala is in danger of becoming the next C++. And then there is Smalltalk, my very old love.

It still holds that Smalltalk is a productive environment for me. Together with Aida/Web or Seaside, it is easy to build really fancy web applications. Well, Seaside can get very complex; I remember stumbling many times over how to do this or that in Seaside. Today the situation is far better, because there is a lot of documentation about it. And Smalltalk itself still has the very big advantage of being simple - in the environment as well as in the language. Whether you are a strict object monk or a functional evangelist, Smalltalk invites you to write down the solution in the way that is convenient for you.
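
As a reminder of how little ceremony Seaside needs, here is the classic counter component, written down from memory (so take the details with a grain of salt; the class and application names are mine):

    WAComponent subclass: #GVCounter
        instanceVariableNames: 'count'
        classVariableNames: ''
        package: 'EnrichedGames'

    GVCounter >> initialize
        super initialize.
        count := 0

    GVCounter >> renderContentOn: html
        "Render the count and a link whose callback increments it -
         the state simply lives in the component."
        html heading: count.
        html anchor
            callback: [ count := count + 1 ];
            with: 'increment'

    "register it once, then browse to /counter:"
    WAAdmin register: GVCounter asApplicationAt: 'counter'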

In addition, the last ESUG has shown that interesting evolution efforts are ongoing with this old lady. Ok, I'm still missing comfortable remote programming (via the web) and the capability of taking advantage of multicores. It seems that Smalltalk still has no strong answer to the coroutines, task pools, STM and whatever other constructions have been introduced in other environments. I hope there will be progress some day.

For me, as someone who wants to concentrate on solutions instead of studying libraries and complex design patterns which only hide the faults of a language design, Smalltalk is a natural choice. Reflection and on-the-fly debugging make it easy to find out how given code (= libraries) works and to iterate toward the really useful and intended solution.

Smalltalk will not do everything in my solutions. CouchDB, and therefore Erlang, will store the data, communicating via XMPP with the Smalltalk application and maybe with other modules as well. But one thing is clear: Smalltalk is for me the best union of all the concepts I like with the highest productivity. So I will use it - Happy Smalltalking!

02 June 2011

Functional Safety of Software

For 3 years now, I've been working as a Safety Engineer. Having started in the Air Traffic Control domain, I now have to consider the functional safety of locomotives. So in parallel, my perspective has been extended to physical, mechanical things, which is really cool.

Along with this evolution, my interest is focusing on the question of how software has to be built in a functional safety context. This question strongly relates to the problem of how to overcome human imperfection and avoid human errors in the concept, the architecture and the whole technical realization of a system from the beginning. It is a question of methods, of provability, and of well-known and well-tested engineering practice.

In consequence, my view on the modeling of requirements, system architectures and function networks, as well as on computer science itself, has changed. I see the challenge and the benefit of tool-based software development in a different light. I see that we need an information science, not a computer science. I understand that we are still searching for the rules of information which correspond to the rules of physics in mechanical engineering. Software Architect, Software Engineer: these are terms whose meaning has expanded today.

Therefore, I want to add functional safety as a theme for this blog, and that fits well, because metaphysics is the only hope to get further :) I will tell stories about the communication between software engineers and safety engineers, the amusing ones as well as the horror at work.

So stay tuned :)

17 January 2010

Time to Change

In the last year, many things happened in my life as a computer professional. I am not writing code anymore; I now work as a safety, quality and requirements engineer. And I am glad about it. Let me explain why.

Over the last years, more and more the doubt came up in me whether programming is really what I am strong in and what I enjoy. Even working in different companies, where completely different software was developed - in Smalltalk, Fortran, C, C++ and so on - I always had the feeling that I was doing the same thing again and again. The problems of software development repeat; the kinds of solutions are well known. I met many people, each with their own style; I had bosses who could never be compared, which was and is good. But the game - software development - was still the same from my point of view. Different in colors, in tones, in details, but in fact the same. No big progress, as some well-known masters of software have also said. And I had, and have, so many other interests.



I know this picture is not objective; my losing interest and my perception of the software business influence each other and are not decoupled. But which is the cause of what doesn't matter in the end. So I decided to take a path in my career which leads me away from software development. That's why I am happy to work as described above.

But what about the hobby? On the private side, I looked at many programming languages and read about software processes, design patterns and new approaches in the software domain. But the doubts described above appear here again. And at the same time, old loves came back to me: philosophy, electromagnetics, electronic teaching, the question of how a set of neurons can think and gain insight.

I've just finished the great book "Coders at Work" by Peter Seibel, which contains interviews with some big names of the software business. And the part with L Peter Deutsch, who left the software business at a late point in his life, convinced me to make this decision: even as a hobby, software development will no longer be the objective of my activities. This means, to be precise, no longer looking at patterns, architectures and languages for their own sake.

Of course I will still program to bring my new ideas to life: maybe games, graphics, experiments. But the ideas are now in front, not the question of what is the best language or the best approach. I make no attempt anymore to make software development better (one exception: requirements and safety engineering). Let that be the task of the computer scientists, which I am not. Just be creative, as long as it is possible.

For this goal, Scala and Processing have attracted my interest, both of which work with the Java environment. The Java environment has so many possibilities. Erlang is nice for experiments in the artificial neural network field.

But don't misunderstand me: I will not close my eyes. I will watch how technology and software science move on, because tracking and judging the social impact of new things is important. But my role will from now on be that of an observer of computer science, not an actor in it.

22 April 2009

UML in the Functional Programming world

UML is a well-known standard notation in the context of object-oriented design and programming. It allows one to define classes and their relationships. It makes it possible to display Use Cases and the Activities performed for them. The way objects interact can be visualized in more than one way.

The problem is: it's all about objects. But we are in the age of a new rising star: functional programming. Here we deal only with functions. There is an input, there is an output, no state. Just a rule mapping the input to the output. No behavior. Just a chain of input-output mappings. It seems that UML doesn't fit very well. Some languages interpret functions as objects, but in the end that doesn't help. Functions are a mathematical construct; they are best defined in the way we learned in our mathematics education.
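
Smalltalk itself shows how thin this border is: a block is a pure input-output mapping, and at the same time an ordinary object (a tiny workspace sketch):

    | discount |
    discount := [:price | price * 0.9].  "a pure input-output mapping"
    discount value: 100.                 "==> 90.0"
    discount numArgs.                    "==> 1 - asked from the function as an object"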

Let's take a step back and ask: what is the goal when we use UML? It is used during the design phase, so one goal might be to communicate architectures. Another goal might be to write down design elements of the software to be built, on a level which allows us to use abstractions and to suppress unnecessary details. Of course, abstraction and communication are not only about data structures, but also about what is done with the data, and by which entities. And in the end this is the nature of using UML: finding out and documenting what is needed, what happens, and what comes out.

Oops. That reminds us of functions. There must be something wrong. So, again, one step back: how should we document software? Actually, the main purpose of any software (except software controlling some device, whose nature is just to be an extension of mechanics) is to provide information. Information aggregated from other information, as well as calculated, filtered, selected and transformed information. The nature of information is to be data with an intention: a question to answer, a goal to achieve. If we describe the data and its intention, i.e. the information, together with its origins and lifecycle, we have in fact described what a system is. It doesn't matter whether the technical implementation is done by functions or by interacting objects. That's a matter of the detailed technical description, which is important only for a small set of people (and it has to be decided for every project whether the code is the documentation, or whether detailed UML diagrams or equations are needed). Or in other words: it is a matter of the level of detail.

Let us note two additional observations:
1) The data coming from or going into the real world can be modeled better by objects than by functions.
2) The context, as part of the system environment, is the source of intention (what do I want to achieve, why do I do something).

Bringing this all together, we can see that UML helps to document the data, its intention and its lifecycle via Use Cases (maybe annotated with activity diagrams, for context and intention), collaboration diagrams (for lifecycles) and class diagrams (for the pure data). In this sense, UML should help document software independently of the programming paradigm used, with a sufficient level of detail for most cases. And if more detail is needed, UML is appropriate for OOP, and mathematical equations for FP (and pi calculus for heavily concurrent systems).

25 March 2009

Requirements: The Picture of Software

As we all know, all software development starts with the wish or the need of a customer. The customer has a problem that he (or she) wants to solve. It may be that the customer also has the idea of a possible solution in mind. "If I had .... I could..." or "if I get ... then I could" and other signs of such ideas are well known.

In most cases, this idea is embedded in the domain of the customer; it addresses data which are key parts of that domain. Or it is related to actions performed in this domain. Either way, it has no technical nature. (It may be that the customer's domain contains technical things, computers, machines, but then they are part of the domain like everything else and therefore natural to him.)

Then there is the other case: the customer has no idea, just a problem. But then he has at least a goal, which means there is an idea of a situation he (or she) wants to achieve.

Now it is our job as software professionals to draw a picture of a system which, being embedded in the customer's world, would solve his problem and let him reach his goal. If we have this picture and it is approved by the customer, we can build the system and the customer will be happy.

That sounds easy, but it is not. The customer has a sound picture of his domain in mind, but a weak imagination of the new system (well, he is no technician...). And this new element must fit in. The software professional, on the other side, has a strong imagination of "his" new system (what a surprise...), which must be embedded in the - from his point of view - weak picture of the customer's domain. It is easy to understand that the weak parts are in danger of being forced to fit and therefore of getting distorted. Now the problem is obvious: all parties have different parts of the picture that are weak. In consequence, the resulting overall pictures of the customer and the software developer will necessarily differ.

We know that an iterative process is the best solution for software development. By creating prototypes and mock-ups, things that can be "touched", the picture comes out of the heads of all parties and can be experienced directly by everyone in early project stages. And because it is not feasible to get through this in one step, many steps are taken, in which ever-new prototypes and mock-ups display and stress different aspects. But in sum, over all iterations, the picture should become more and more equal for all parties. The picture of the software becomes clear and sharp; we've got consensus. (BTW, this would be the planning guide for iterative processes: touch all constitutive aspects at least once.)

Sounds easy. But software is developed in teams, and not all members of the team can talk with the customer directly. Even worse, it is not always possible to talk with one customer; there may be many stakeholders. So the picture in mind must be communicated via documents. But how?

To find an answer, think of a patchwork image where the task is to fill an empty place. The first question is the shape: which shape do I need to fill? Related to our software project, this would be the interfaces: logical, functional and physical. Then one might look at which colors would match, then at what the part should show to continue the shapes or patterns of the whole picture, and so on. That means: for communication via documents, one should take certain points of view on the domain and the new system, explain them, and explain why the part fits in the hole. One chapter, one point of view, with the goal of coming close to the picture of the software.

If all important aspects are described like this, then there is a chance that the many people who read the documents will form a more or less comparable picture of the software (and its environment) in their minds, and build the right software at the end.

BTW: there are people who can design such missing parts of a picture handling all aspects at once - we call them artists.

17 November 2008

Requirement landscape with Croquet

Requirements Engineering is something I have been looking at for many years now, and I always was and will be interested in it. But one thing I am still waiting for: the visualisation of requirements.
So one might ask: visualisation? Requirements are textual, semantically rich objects, nothing which would come close to what is needed for making charts or interesting 3D pictures. Well, this may be right. But requirements have structure: they are related to other requirements, and they build groups. In addition, they have an inner structure, if they are built using the methods available for professional Requirements Engineering. Visualizing requirements along such structural aspects would be the same as visualizing the network or graph of requirements. Clusters, or too-sparse regions, of the requirements network would be visible at once.
But I think more could be done. The set of requirements should describe, as completely as possible, the system to build or to engineer. This set as a whole should be able to induce the picture of the software. By reading the requirements, one after another, this picture comes into existence in the mind of the reader, and its shape and details become more complete with every requirement read.

But wouldn't it be nice to visualize exactly that process?

As a result, this visualization would show which requirements cover which aspect of a system, and to what extent. For example, the system could be a landscape, or a city street plan, and the size and position of the requirements would show their contribution to the whole (system).
For a while now, I have been asking myself whether Croquet would be an optimal tool for such visualizations. The space metaphor, and even more the collaboration possibilities of Croquet, seem promising for such visualization experiments. Unfortunately I have no time, so I really hope that someone else will do experiments in this direction.
This is what I would do (a rough sketch of the first point follows the list):

  • defining a meta-model of requirements, to address the aspect of their inner structure, just like EMOF does for conventional models. I mean not a semantic model, only a structural model (conditions, quantors, action, object, subject, test criteria ...)
  • defining a meta-model for describing the always-needed aspects (= questions) of a software project; let's call it the system meta-model. You could imagine this as a checklist of questions that need to be answered in most projects.
  • defining a semantic meta-model, to describe the relations between semantics in general, requirement parts, and the system meta-model.
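
A rough Smalltalk sketch of the first point (all names are invented; it is only meant to show the granularity I have in mind):

    Object subclass: #Requirement
        instanceVariableNames: 'condition subject action object testCriterion relations'
        classVariableNames: ''
        package: 'ReqViz'

    Requirement >> initialize
        super initialize.
        relations := OrderedCollection new

    Requirement >> relateTo: aRequirement as: aSymbol
        "A typed link, e.g. #refines or #conflictsWith - the landscape
         visualization would lay out exactly this graph."
        relations add: aSymbol -> aRequirement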

Those are only raw ideas. Too bad I have no time.....

09 November 2008

Decisions of a Software Architect

In the past, it was not so clear to me which things are constitutive for the work of a Software Architect. This is especially true when the question comes up of how to distinguish a Software Architect from a Software Engineer. At the moment, my work for the company is made up of UML, pure architecture design and requirements engineering. From this, I noticed some things which may be surprisingly important aspects of the work of a Software Architect. I call them "surprising" not because I never thought about them, but because their real effect and importance in practice hit me hard these days.

If a new system has to be made, it all starts with requirements. Everyone knows that. Some people regard requirements engineering as collecting a bunch of text, or a big set of items in a database, in order to describe some properties of the system to be built. For me, requirements engineering is much more: it is the search for, and identification of, the model that best describes the customer's universe. Such a model must describe the processes and information which constitute this universe and in which the new system will be embedded. We will see why this is important.

From theory, I knew about the value of Use Cases and UML Activity Diagrams in the context of requirements engineering, but I was surprised how useful they are for performing requirements engineering as described above. By binding requirements to Activity Diagrams and Use Cases, it is possible to get an output of the requirements engineering phase which - besides the description of the properties of the new system - answers the questions of which actions are performed by whom, which information is needed, which information results, and how the system to be built will fit into this universe.

All this is the input for the Software Architect, and he must use it to find answers to this question:

Which are the parts of the system to be built, and how must they interact?

Well, this seems trivial, and it is even easier if the processes and the information flow in the customer's universe are known. But now comes a surprising aspect:

Which of these parts of the system are best described by the data they use and compute, and which parts are best described by the processes they perform?

The answer depends not only on the properties of the future system; it depends strongly on the nature of the processes and the information flow in the customer's universe. The outcome of this architectural decision has very long-range consequences, because it determines the complete design and implementation process: a data-driven architecture has different characteristics than a process-driven architecture. Sure, data and processes are not independent. But for the technical design process it makes a difference whether one thinks first in terms of data or in terms of a sequence of actions.

In the data-driven world, the next decisions are related to questions like who will access the data, how often, and in which way. In the process-driven world, the Software Architect has to decide about events and sequences, and has to work with time.

Being at this stage, the next surprising aspect, the next decision point, has to be regarded, and it has a huge impact on the design:

What are the points of possible change?

Here the Software Architect must invest fantasy and experience to identify the "weak" parts of the required system. He must look for the things which may not be as fixed as the requirements suggest, and as a result he must decide which things of the system (roles, data, actions) must be designed in a way that makes change easy. This is only possible if the requirements describe more than the system itself. I think the best sources for finding such points of possible change are the relations of the system to the universe of the customer.

And here we are at the heart of the work of a Software Architect, as I believe it to be: find a design which makes change in the foreseeable directions easy. This gives the material for further design decisions; it helps to reduce the set of possible solutions for a design task, to make the system what we are told all the time: make it simple, but not simpler. Design for change, but only for the change which will be needed. The power of design for change is what really surprised me ....

Now the detail work can start. But one thing remains, which I often discussed with colleagues: what about technical constraints? Must a Software Architect know about the possibilities and properties of certain technical solutions? Well, at this stage, my assumption is that he must not. On the contrary: the Software Architect must find a design which can adapt to the particular technical constraints. So the decision point would be:

How can the design be made robust against the technical constraints and the implementation?

If writing the required amount of data to hard disk is not possible by simply writing to one single file, an architecture is needed which handles multiple files. The Software Architect cannot know all the detailed constraints which come from the many years of experience of a software developer, a database specialist, a web specialist etc. In addition, such constraints are change points "from the bottom". Yes, the architect must have a rough knowledge of what is possible with today's techniques, but only in order not to be too unrealistic. Therefore, a Software Architect must keep contact with specialists. By the way, here I also see the border to the role of the Software Engineer: this one must have detailed technical knowledge, and therefore may be responsible for the "lower" or detailed parts of an architecture. He has to bother with the fine-grained technical details.
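
To stay with the file example, a hedged Smalltalk sketch of what "robust against the constraint" could mean (all names invented): the rest of the design talks only to the abstract store, and whether there is one file or many stays a detail behind the interface.

    Object subclass: #RecordStore
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'ArchSketch'

    RecordStore >> append: aRecord
        "The architecture above depends only on this message."
        ^ self subclassResponsibility

    RecordStore subclass: #ChunkedFileStore
        instanceVariableNames: 'chunkLimit chunk'
        classVariableNames: ''
        package: 'ArchSketch'

    ChunkedFileStore >> initialize
        super initialize.
        chunkLimit := 1000.
        self openNextChunk

    ChunkedFileStore >> openNextChunk
        "Sketch only: in reality this would close the current file and
         open a fresh one; here a collection stands in for the file."
        chunk := OrderedCollection new

    ChunkedFileStore >> append: aRecord
        chunk size >= chunkLimit ifTrue: [ self openNextChunk ].
        chunk add: aRecord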

So, as a summary, these questions help to come to architectural decisions:

- What are the roles and the parts of a system, and how do they interact?
- Which parts are data-driven, which are process-driven?
- What are the points of possible change in the customer's universe?
- What are the points of possible change from the technical implementation?


Of course, this is not all of a Software Architect's work, but it really helps ;-)

29 June 2008

Taking new points of view

In the last few weeks, some new things came up which pushed me to new points of view on the world of software.
The first is the iPhone from Apple. I tried out the SDK, and I like it very much. Old feelings and memories came up from the time I wrote the software for my Diploma thesis on NeXTStep. I like Objective-C, I like the graphics model (Display PostScript then, Quartz or Display PDF now).... it is an exciting way to realize ideas.
But the property "new point of view" is not a consequence of the quality of the SDK or the language. The iPhone, with its new kind of user experience, is a guide to looking at user interfaces differently. Multitouch offers new ways of interactive work with data, and the limited screen size forces one to reduce the optical design down to what is really needed. For example, I observed that for some tasks the iPhone is my favorite tool: some web sites are optimized for the iPhone (or mobile phones) so that they are more comfortable to use with such devices. As a result, using the iPhone, I get only the relevant things displayed, instead of the information flood presented on many current web pages when using the computer - and I hope there will be a lot of such optimized web pages in the future.
These two constraints - show the relevant things on a small screen in an aesthetic (!) way, and make the data or the documents themselves the objects of manipulation - are the reasons for my decision to work with the iPhone and the SDK in the future.
One remark: of course these things are not completely new; theoretical papers and books (like the one from Raskin, and other work) have described such problems and solutions already. Not to forget the original idea of the DynaBook of Alan Kay. But now it has practical existence: there is an interesting device, and you can play with its hardware and software.

The other thing giving me a new point of view has to do with my work at the company. Here I got some insight into the field of "safety software analysis". Working in this area means looking at the possible hazards which can be caused by the software, and at how severe such a hazard would be if it happened. Then, depending on which international or national safety standard must be applied, the analysis determines which probability of failure must be required from software modules, or even single functions, if the hazard is to happen with at most a given probability.
All these things have to do with cause chains, probability, and the analysis and modeling of software. Definitely my kind of interest, but in sum, for me, also a new point of view on software :-)

11 May 2008

Design, everywhere!

We are living in the age of Web 2.0. This can be seen in how fast the nicely designed Web is growing. CSS and modern web frameworks like Seaside, Ruby on Rails or Zope give people the option to meet aesthetic needs and expectations. Even standard GUI frameworks provide so much graphical power that a real designer is needed to use it for the best result. Gone is the time of the limited, gray, hard-edged windows and dialog boxes to which we believed ourselves accustomed over the last 15 years. Today we have designed web interfaces, designed reports, designed user interfaces.

However, the word "design" faces us not only in relation to this "outer" things, the immediatly visible parts of software. Some people also think about software design, architecture design, system design and so on. But in this case, not the aesthetic aspect is adressed, these terms belonging to the inner part of the machine, the functionality, its building blocks of programming. All the things an engineer is called to bother with, not an designer.

The surprising fact is that even GUIs are not well designed if their functionality is bad. Every developer who has been in contact with customers knows this very well. In fact, all user interfaces need both: aesthetics and usability. In consequence, I simply ask: can it be that architecture design must have aesthetic aspects too?

The short answer is: yes! All the years working as a software developer in many different companies have taught me one thing: aesthetics is a good advisor when judging the technical quality of an architecture. Just as all measurement engineers know that the human eye can draw a mean line through data points very accurately, the human perception of aesthetics is very robust. But the problem is that it is not very explainable what "beautiful" or "ugly" is, and it is far less explainable in which respects a "beautiful" design of an architecture guarantees a technical plus for its functionality.

So there is a mapping to be defined which correlates aesthetic principles with technical creation rules. The goal of such a mapping would be to understand more of the mechanisms of technical design and to be able to teach them. Some simple mappings are well known, though. Symmetry is a design goal in aesthetics as often as in technical design. Another is rich simplicity (about this term I will think later). But there are a lot more, which many of us use without being conscious of them.

But there is help: at the Software Composition Group at the University of Bern (Switzerland), a tool called Code City has been developed. With it, it may be possible to visualize software such that the aesthetic senses can be used. So this tool is a translator between the technical domain and the aesthetic domain. Of course, this is only one possible solution for this mapping, but it is one we have now ;-). The tool is positioned in the context of Moose, which I mentioned in other posts, and because there are efforts to port Moose to Squeak, I hope that I can use all the other things related to Moose soon :-)

02 March 2008

Live UML - Explorative Modeling: Synchronizing

One more point on modeling with Smalltalk: once there is a first implementation of the application, started from the Smalltalk live model as described in the last post, it must be developed further to cover all the necessary details. These details pop up from requirements which were unimportant for the basic model, or they are caused by the translation from the Smalltalk environment to the destination language. As a result, there is a runnable implementation of the original live model in the destination language which provides nearly everything the requirements demand and which also matches all demands of the destination environment. So far, so good.

But there is one drawback: the implementation has certainly drifted away from the Smalltalk model; the two are never synchronized. That would not be a problem in itself, but of course the implementation is not perfect. It has bugs, or during the implementation new ideas or problems pop up (every environment has its own pitfalls :-). Or new requirements come into view, of course ;-). And then a decision is needed: resynchronize the Smalltalk model with the implementation, and decide about these new requirements, problems etc. there, or throw away the model and work only with the implementation?

Well, it depends :-) But I have set up some simple rules to make this decision easier:

  • Don't leave the model too early. There is certainly a time when it makes no sense anymore to keep the live model up to date. Most destination systems differ greatly from Smalltalk, and if one has to bother about too many details in Smalltalk, it is time to leave it. But I have observed that in many cases this point in time can be delayed. The model is a good tool even when the first code in the destination environment exists.
  • Don't ignore too many details. Of course the nature of modeling is to ignore some details and to focus on the building blocks of a problem. But making the model too easy creates the necessity of keeping it synchronized much longer than really needed. Therefore, it also makes sense to anticipate and model some things which are strongly related to the destination platform. To say "oh, in Smalltalk I don't have this problem" makes the model too easy sometimes. Also keep track of the interfaces of the destination platform, and model them with mock objects or something similar. Ignoring input and output makes the model too easy.
  • Don't implement twice. If you have the full application in Smalltalk and in the destination environment, you have left the model too late :-). Some details, which are not part of any model, which are only matters of the destination language and not essential to data structures or input/output, must be solved directly in the destination environment.
For a successful result, one has to gain experience with the best level of detail and the best point in time to leave the model. So I will just keep using it, to get this experience.

24 February 2008

Live UML - Explorative Modeling in the small

Looking at the applications programmed in the world, and at the programming languages used, the situation seems not positive for Smalltalk. But for me, Smalltalk is not dead. On the contrary: for me it has growing value, although my daily life is determined by C and C++. How can this be?

If I have to build a new application or a new module, today I use Smalltalk to design the architecture. In general, if one has to do something new, classes must be identified: what they do, which data they work on, and more. Extensibility (for new requirements) must be kept in focus, and an interface must come into life which can be understood and used by others. Here Smalltalk can help a lot: a few objects doing something are written in a short time. Every rearrangement and every new idea can be implemented quickly. Because Smalltalk allows one to modify code even in the debugger, and to evaluate any code snippet at any place, the right design grows in the way evolution works: simply try it. It grows, in the best sense.
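
A hedged illustration of this style (the domain and all names are invented): sketch a class, then poke at it in a workspace until the design feels right.

    Object subclass: #Track
        instanceVariableNames: 'plots'
        classVariableNames: ''
        package: 'RadarSketch'

    Track >> initialize
        super initialize.
        plots := OrderedCollection new

    Track >> extendWith: aPoint
        plots add: aPoint

    Track >> heading
        "First design guess: the direction between the last two plots."
        ^ (plots last - (plots at: plots size - 1)) degrees

    "in a workspace: evaluate, inspect, rearrange, try again"
    | t |
    t := Track new.
    t extendWith: 0@0; extendWith: 3@4.
    t heading inspect.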

At the end, I get a set of objects, a set of methods and a communication network (who talks to whom) which can carry the basic functionality. From this it is easy to derive UML - for communicating with managers - and the implementation in the destination language. Now, if - as in my case - the destination language is C++, there are some pitfalls, of course. But because the runnable Smalltalk model has partitioned the problem into static AND dynamic elements, the problems are small and solvable.

In fact, this is xM at the smallest possible scale....

In the future, I plan to create some ready-to-use modules which provide elements that are always needed in our company. A radar data parser, or some kinds of common displays or GUI elements, are examples. Especially when combined with Seaside, this should result in a great Live UML (xM) framework.

So Smalltalk is not dead. On the contrary, it is THE tool for robust and failure-safe development!

14 February 2008

Rendering documents - supporting the semantics

The rendering of documents is only one part of the story. Another goal would be to build a tool not only for creating a document, in the sense of setting it in a certain form, but also for collecting the right content for it. Now, it is clear that it is not possible today to build didactically efficient documents which are optimal for human reading and understanding. But if we don't try single steps in this direction, we will never come closer to that goal.

So what can be done today? Documents are answers to questions. Sounds trivial, but that is the crucial point. Maintaining content means maintaining questions. So this could be a simple first step: manage not only the content, but also the questions related to it, and manage the relations as well. The other thing is terms. Terms are the most basic but most important building block of any content. In consequence, building term networks (and visualising them) would be another brick in the wall.
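
A minimal Smalltalk sketch of both steps (the structure only; all content is invented): the content hangs off explicit questions, and the terms form a small relation graph that could be visualized.

    | questions terms |
    questions := Dictionary new.
    questions
        at: 'What must the system do on power loss?'
        put: 'Section 4.2: reach the safe state within 200 ms'.

    terms := Dictionary new.
    terms at: 'safe state' put: #('hazard' 'shutdown path').
    terms at: 'hazard' put: #('risk' 'safe state').

    (terms at: 'safe state') inspect.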

At the moment, I am reading about Husserl's Phenomenology, a philosophy which looks (among other things) at how our understanding can come to see the nature of things. Maybe there are aspects which could be tried in the context of document rendering. And again, computer science would become practical philosophy.

Of course, I am also trying to get an overview of things like the semantic web.

30 January 2008

Rendering documents

For many years, I have been interested in Requirements Engineering and in documentation for the development process (especially UML). This of course requires creating documents. Often documents must be written by hand using some office software (like OpenOffice). That seems normal, but documents in a company environment must have a defined format and must contain defined things. On the graphical side - the UML part, for example - the writer has to bother with a clearly visible layout and design.
So I am dreaming of a system which could render documents like requirements, test plans or even diagrams. Well, some of the bigger software on the CASE market can do this; it is possible to generate Microsoft Office documents from database entries. As far as I know, these are mostly based on templates. But I want my own flexible render engine, which works more with design rules and a meta-model of documents than with templates. Or in other words, I want to stress the point of *rendering*.
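
To show the difference to templates in the smallest possible way, a hedged Smalltalk sketch (all names are invented, and the markup is only loosely ODF-flavored): every document element is rendered by a design rule looked up by its kind, and the rules can be swapped per company format.

    | rules render doc |
    rules := Dictionary new.
    rules at: #heading put: [:text | '<text:h>', text, '</text:h>'].
    rules at: #requirement put: [:text | '<text:p text:style-name="Req">', text, '</text:p>'].

    "the engine itself is only lookup plus apply"
    render := [:kind :text | (rules at: kind) value: text].

    doc := String streamContents: [:out |
        out nextPutAll: (render value: #heading value: 'Braking').
        out nextPutAll: (render value: #requirement value: 'The train shall brake within 2 s.')].
    doc inspect.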
Because all modern office software uses XML for storing its documents, document rendering can be done in any language, and of course I have some special ones in mind (Erlang, Smalltalk).
At my new job, I have to create a lot of documents, so I hope I can build a little bit of this during my work.