21 October 2011

Chicken and Egg

In our dry and serious technical world, the call for human aspects gains more and more attention. If it is really the case that human beings have to create all these products born out of mathematics, logic and rationality, then for sure we have to look at the poor human beings in this process. How will they feel, will they like what they do, will they be happy?

Let's switch to a chicken farm. The fate of a chicken is to produce eggs, a thing which seems natural for it. Chickens lay eggs. So it may be that the chicken is happy about every egg, because eggs are the purpose of its life, and it can see this purpose. And once the egg is there, the chicken cares for it until this tiny little white thing turns into a young yellow chick. Having reached that, the chicken feels good, yes?

Now, in the nasty modern chicken farms, where the chickens are doomed to sit very close together, nearly unable to move or to see anything other than steel and other chickens, it is hard to imagine that a chicken will be happy. It cannot see its egg, because the moment it is laid, the egg is taken away. The chicken has no contact with its purpose in life, and the conditions needed to produce high-class eggs are missing: varied food, fresh air, sunlight, movement and normal interaction with the other chickens.

A creative mind may have the idea to produce the best eggs ever with the luckiest chickens ever. What lucky chickens: they can walk in an English garden, the earthworms are hand-selected and at least 5 inches long, the temperature is very pleasant, like a warm sunny spring day around tea time. Also, the chicken does not have to lay any egg if it chooses not to. With such a lucky fate, any egg we do get must be the best egg ever, no?

What I want to say is this: the key word is balance. It is all about the product, but also about the humans and the processes (including technology) used to build that product. No factor is more important than the others. Sometimes I have the impression that in some discussions the human factor is overstressed. All factors are interwoven with the processes and laws that have to be considered. Software development should neither be set up like a chicken factory nor like a paradise for the self-fulfillment of software developers. The balance has to be held.





16 October 2011

Hello, Siri

Apple's Siri is, from my point of view, the most underestimated thing about the iPhone 4S. In fact, it could be the start of another change touching everything. Although well-working speech recognition still holds some fascination, we have gotten used to it over the last years. Automated call centers and the speech commands in my car are well-known examples.

There are two other properties of Siri which are special: using knowledge and using context. You no longer have to speak isolated commands like "Radio" "on". You can talk as you would with a person - a beginner or a child, but a person. And second, this "person" has knowledge and context awareness. By using data from location services, databases, documents etc., it is possible to interpret what the user said. It is a first step into a much bigger world: artificial understanding. If speech recognition is our planet, then artificial understanding is like offering us the Milky Way.
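To make the role of context concrete, here is a toy sketch of my own (all names, rules and data invented, nothing like Siri's real implementation): the same words only become actionable once they are combined with knowledge the assistant already holds, such as the user's current location.

```python
# Toy illustration: resolving a spoken request against held context.
# All names and rules here are invented for illustration only.
def interpret(utterance, context):
    text = utterance.lower()
    if "weather" in text and "here" in text:
        # "here" is meaningless in isolation; only context can resolve it
        return ("weather_query", context["location"])
    if "reminder" in text and "home" in text:
        # a location-triggered reminder needs the saved "home" position
        return ("geo_reminder", context["home"])
    return ("unknown", utterance)

context = {"location": "Cologne", "home": "Bonn"}
interpret("What is the weather like here?", context)  # -> ("weather_query", "Cologne")
```

The interesting part is not the string matching, which is trivial, but that the answer depends on data the user never spoke aloud.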

For me it does not matter whether Siri works exactly as described above or whether there are many problems with it. It is the start of a new way to work with artificial intelligence. It is a hypothesis of mine that the universe works probabilistically. I believe that the physics we use today - with solutions of algebraic equations - is just the recognition of averages or of limits. In quantum theory and statistical thermodynamics the probabilistic nature is directly observable. And my strong belief is that the brain - as far as it relates to knowledge processing and control - is a statistical engine, where one of its building blocks is association and pattern recognition.

Collecting and correlating pieces of knowledge - which we can call documents - and building knowledge from them is what really impresses me about the IBM Watson project. For this judgement it is irrelevant that Watson is built on brute force and a collection of hand-made optimizations for special problems. With time, we will come closer to a theory describing all of this.

The important thing is this: Siri (I think) and Watson are knowledge processors in the sense above. They move our view toward the probabilistic nature of the universe, shifting our paradigm of what computers can be good for, maybe even the paradigm of how to use that huge knowledge base called the Internet.

This is how all things begin: small and decent ;)

12 October 2011

Tell me a story

Today I've been reading architecture documents in preparation for a functional safety analysis. Such an analysis is performed to assess whether the given system and its architectural elements fulfill the safety-related requirements. As you might imagine, for this work it is helpful to understand the effect chains and the overall operation of the architecture.

So, sitting there and trying to collect the facts spilling out of the pages into my brain, hoping to puzzle together a picture (the picture of the software), one thought flashed through my mind: can architecture documents tell stories - and should they even? (For those who read my last post: let's assume the Constructible Hypothesis is true.)

Technical systems are built by human beings (well, at least at the moment). Because humans evolve day by day, with growing experience shifting their view of things and their attitudes, these technical systems have a history. Humans learn about problems, learn about solutions and, in consequence, learn to evolve methods and tools.

But not only the creator has a history; the environment - the source of the problems - is also non-static. Customers have new wishes, technical or environmental constraints change. All this together makes it undeniable that a technical product has a history. For example, many things on modern ships have their roots in the old wooden sailing ships. The instrumentation of airplanes is frozen history of aviation construction.

There is a movie quote (I think from "Big Trouble in Little China" with Kurt Russell): "That's how all things begin: small and decent". Yes! This is the power of knowing history: the first versions of things are often simple, not elegant or efficient, but small enough not to obscure the basic idea behind a solution. Or better: at the beginning of technical things, the intention of the architectural elements is still visible. Going back in history means going to the core of a technical mechanism, the will of its creator. Going back means gaining insight into the basic idea of things.

But there is more. We said technical things evolve. But they do not evolve without reason. If technical properties or functions are added, that should happen because of changed requirements! So history should tell you a story about sound decisions: why this function or this effect was added to the system, and why the old system was not good enough. Again, evolutionary steps have an intention behind them (hopefully!). This is far from trivial; many team members don't know why software module x or parameter y was added. Look how many things are introduced and killed later on because the knowledge of their introduction got lost.

Now, knowing the history of a technical product seems good. But the question is: can an architecture documentation tell that history?

Please don't think of the huge version trees of tools like CVS or Subversion. It is of course not practical to write down a full history log. But if you see a technical product as a chain of design decisions, controlled by intentions, it becomes clearer how to document:

show the basic problem and the basic idea, and show the most important intentions from the historical sequence of intentions that shaped the architecture as it is today.

Showing intentions is not necessarily complicated: well-chosen names and wording can suggest what the basic idea or problem was. Also, arranging the document along a level-of-detail gradient can have the effect of presenting the reader with a trip through time. AND: show the associated problems!
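One lightweight way to capture such an intention is a short decision record per important design decision - a sketch of my own, with all content invented, just to show the shape:

```
Decision:   Split the motor control into a command and a monitoring channel
Problem:    A single channel cannot detect its own failures
Intention:  Keep the monitoring channel simple enough to be argued safe
Rejected:   Triple redundancy (too expensive for this risk level)
```

A handful of such entries, kept next to the architecture description, already tells a small story: problem, intention, and the road not taken.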

But my statement is this: just writing down an architecture as it stands in front of you is often not enough. Such documentation may be useful for rebuilding technical things (schematics, for example), but not for understanding them. Or to put it much more simply: show the human factor in the architecture.

I know, this is not a precise plan for how to write documents. That may be the subject of a later post :)


09 September 2011

Agile, Architecture and other things

There are questions about software development I'm really concerned with. These questions arose from my experience as a software engineer, software architect, requirements engineer and team leader.

One expression of these questions is the following problem:
  • How can software engineering become exactly that: engineering?
This question is not trivial. But first we have to ask: does an answer exist at all? If there is one, it should be possible to compare software engineering with other engineering domains, say mechanical engineering or architecture. I often try to map properties of mechanical engineering onto software engineering. But the special mode of existence of software - not being bound to physical resources, and having multiple instances without additional matter or energy - may have an impact on the metaphysics of software.

Now, one of these special properties of software is its potentially unlimited effect. Where mechanical devices are limited by matter and energy, software is completely free, unbound, limited only by its runtime, the computer. And we have learned how fast computers extend their capabilities. In other words, the effects caused by running software can be very, very complex.

This observation raises the following questions:

  • Is it possible to construct technical systems of any complexity level? (The Constructible Hypothesis) Or, as the counter-statement:
  • In general, can technical systems only be grown by applying evolutionary principles? (The Grown Universe Hypothesis)

If the first question has a positive answer, so does the question above (the existence of engineering for software). Both questions also touch another aspect of things built by humans: the aspect of control. To construct things means that the creator has control over every static and dynamic element in the causal chain. The causal chains are arranged by logical reasoning, using basic laws. This would be the place for an engineering technique for software. But exactly here the fog begins, because we know that not everything can be calculated, which means that full control of a system of arbitrary complexity may not be possible. Whether calculability is equal to controllability is an open question.

But if the Constructible Hypothesis were not true, the only other way of building systems would be to let them grow and arrange solutions by themselves. The evolutionary principle means providing just the environmental conditions (or requirements) and letting the system grow. Not necessarily by the system itself, as with genetic algorithms: enhancing software revision by revision through human teams is also a kind of growth and evolution. By the way, this is where I would see agile and lean methods as useful.

If we assume that the Constructible Hypothesis is right, one may ask how we could handle it. One way may be the onion model of control: a few basic laws provide the foundation for building up simple entities and their properties (like atoms from the four basic forces and quantum theory, for example). These entities form their own rules on their level, from which bigger entities with their own rules are built (molecules, like proteins), and so on (cells, bodies...). Every level has its own rules, based on the preceding level and narrowing the space of possible behaviors. DSLs operate on the same idea.
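A tiny sketch of my own (all names invented) shows the layering idea: each level defines its operations purely in terms of the level below, and each level narrows what can happen at all.

```python
# Level 0: the "basic laws" - arbitrary moves on a 2D point.
def move(point, dx, dy):
    return (point[0] + dx, point[1] + dy)

# Level 1: entities with their own, narrower rules - only unit steps allowed.
def east(point):
    return move(point, 1, 0)

def north(point):
    return move(point, 0, 1)

# Level 2: behavior expressed only in level-1 terms - a diagonal staircase.
def staircase(point, steps):
    for _ in range(steps):
        point = north(east(point))
    return point

staircase((0, 0), 3)  # -> (3, 3)
```

The constructive point of the onion model is that you can intervene on every level: you could add a new level-1 step without ever touching level 0.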

Now look at a cellular automaton. There are also very basic rules, which generate a particular behavior of the automaton over time. In the Game of Life, for example, patterns emerge which themselves show behavior. The "glider" is such a pattern, moving across the 2D grid of the cellular automaton. But there is one big difference from the onion model of control: the glider has no rules of its own; its behavior is fully and only determined by the basic cellular automaton rules! That means controlling the glider and higher-level patterns is only possible through the basic cellular automaton rules and the initial conditions. In the onion model of control, you would have constructive possibilities on every level (you could construct different proteins to create a special effect on that level, for example).
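This can be made concrete in a few lines of Game of Life (my own sketch). Note that the glider - the moving pattern meant above - never appears in the code as an entity; only the two basic rules and the initial cells exist, yet after four generations the pattern reappears one cell further down and to the right.

```python
from collections import Counter

def step(alive):
    """One Game of Life generation on an unbounded grid (set of live cells)."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in alive
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives next generation iff it has 3 neighbors,
    # or it has 2 neighbors and is alive now.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# After four generations the glider reappears, shifted one cell diagonally.
assert gen == {(r + 1, c + 1) for (r, c) in glider}
```

There is no "glider level" to program against: to steer it, you could only change the initial cells or the basic rules themselves.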

From this it is clear that if the cellular automaton model (or, let us call it after John von Neumann, the Calculating Universe Model) is true, only the Grown Universe Hypothesis can be true. If, on the other hand, the onion model is correct, then both the Constructible Hypothesis and the Grown Universe Hypothesis have a chance of being true.

So these questions arise:

  • For the control behavior of human-built systems, is the Onion Model of Control true? Or
  • For the control behavior of human-built systems, is the Calculating Universe Model true?

For me these are important questions, even if they seem not that pragmatic :) But it is important to investigate what we can really do and achieve, in principle, in the domain of software engineering. Of course this is no exact analysis, but it should illustrate what I'm thinking about.

If someone has a hint about already available work or texts from philosophers, computer scientists and so on, I would be glad about a pointer :)



03 September 2011

GDC in Cologne 2011 - special observations

This year I had the opportunity to go to GDC Europe and GDC as a press representative. It was very interesting: I spoke to many cool people, and I made some unexpected observations. Like this one: there were so many iPads, especially used by media people. Of course notebooks could still be found, and their number was not small, but you got the feeling that nearly everyone used an iPad. Some people used it as a device for writing AND photographing :)

Besides the public halls, which were full and loud even on the professionals-only day (!), I also visited the business center. There I found an association for supporting game-related business contacts to Iran, the UK, the Netherlands, Abu Dhabi and more. So gaming is really a matter for the whole world, which is very, very good, I think! I also visited the booth of GAME, the German association for the gaming industry.

Gaming can be more than "just playing" (which has value in itself, but is not what I want to point out here): it can be used for learning (see my article in the current issue 9/11 of the German computer magazine iX, or the web sites seriosgames.org, the Serious Play conference and EDUCAUSE), and it can be used to bring sensitivity and money to humanitarian projects. At the GAME booth I spoke with Jasmin Kassner (CTO) and Kaspar van Treeck (CEO) from the Berlin-based company ChawaChawa UG. They do this in a charming way: players play to gain goods which are relevant for some humanitarian project. To give these goods a financial value, they are paid for with the money coming in from selling ad space. Because these ads appear in the context of good causes (the games, the portal...) and because online gamers are a huge user group, placing ads there is valuable for companies. But the really charming thing is that it makes humanitarian projects and problems visible. I could imagine that by putting a little game-based learning underneath, there would be a big chance to educate young people (and managers, btw......) about the real problems in the world. By the way, another good example for this is the UN game Food Force.

I think finding a good balance in this triangle of ad business, humanitarian projects and players is challenging, but definitely worth trying. Humanitarian projects still need more attention in our world!

P.S. Thanks, Jasmin and Kaspar, for your time - I wish you great success with this fine idea!


02 September 2011

Back to Programming - with Smalltalk

For some years now I haven't written a single line of code. One reason was the change in my profession: having started as a software engineer, I currently work as a safety engineer, a role in which one writes many documents, but no code.

Starting this year, I'm also working on games for game-based learning, or Enriched Games (better known under the term "Serious Games"). In these days of Web and Cloud it is clear that my sample games have to run as web applications. So the question arises of what platform or technology to use.

I've seen many programming languages. Haskell I like a lot, but it needs sound knowledge of category theory to really unfold its full power. Lua is interesting. Scala is interesting too, but a little bit complex; sometimes I'm afraid that Scala is in danger of becoming the next C++. And then there is Smalltalk, my very old love.

It still holds that Smalltalk is a productive environment for me. Together with Aida/Web or Seaside, it is easy to build really fancy web applications. Well, Seaside can get very complex; I remember struggling many times with how to do this or that in Seaside. Today the situation is far better, because there is a lot of documentation about it. And Smalltalk itself still has the very big advantage of being simple - in the environment as well as in the language. Whether you are a strict object monk or a functional evangelist, Smalltalk invites you to write down the solution in the way that is convenient for you.

In addition, the last ESUG has shown that interesting evolutionary efforts are ongoing with this old lady. OK, I'm still missing comfortable remote programming (via the web) and the capability of taking advantage of multicores. It seems that Smalltalk still has no strong answer to the coroutines, task pools, STM and whatever other constructs have been introduced in other environments. I hope there will be progress some day.

For me, as someone who wants to concentrate on solutions rather than studying libraries and complex design patterns which only hide the faults of a language's design, Smalltalk is a natural choice. Reflection and on-the-fly debugging make it easy to find out how given code (i.e. libraries) works and to iterate toward the really useful and intended solution.

Smalltalk will not do everything in my solutions. CouchDB, and therefore Erlang, will store the data, communicating via XMPP with the Smalltalk application and maybe with other modules as well. But one thing is clear: for me, Smalltalk is the best union of the concepts I like with the productivity I want. So I will use it - Happy Smalltalking!

24 June 2011

The base for functional safety: Requirements

There are several myths around functional safety. For instance, it is said that functional safety processes make project costs grow. As is true for so many things in the software or system lifecycle: if functional safety activities are not fully integrated into all steps of the lifecycle, they do more harm than good.

Perhaps surprisingly, functional safety starts with requirements engineering! Requirements describe the system as it should be, and therefore they are the basis for verification and validation of the system. Roughly speaking, verification means demonstrating that the system fulfills the requirements. Simple consequence: no requirements, no verification possible.

Validation means demonstrating that the requirements are stated such that the system is usable as intended in its operational environment. But checking this doesn't make any sense if the system cannot be verified: a conclusion from a false premise yields nothing.

Later functional safety activities identify the risks which arise when the system goes operational, and which additional functions the system needs to mitigate those risks down to an acceptable level. Example: a cutting machine carries the risk that your hand gets cut off if you reach in at the wrong moment. The mitigation would be a protection system that stops the machine immediately if any body part crosses a safety border.
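The core of such a protection function can be sketched in a few lines (my own sketch, all names invented; a real implementation would of course live in certified hardware and software). The fail-safe convention is that the machine may only run while everything is explicitly OK:

```python
# Minimal sketch of the protection logic for the cutting machine example.
# All names are invented for illustration.
def machine_may_run(light_barrier_clear, emergency_stop_released):
    """True only while no body part crosses the border AND the e-stop is released."""
    return light_barrier_clear and emergency_stop_released

assert machine_may_run(True, True)        # normal operation
assert not machine_may_run(False, True)   # hand crosses the safety border -> stop
assert not machine_may_run(True, False)   # emergency stop pressed -> stop
```

The matching safety requirement would then read something like "the machine shall stop within a defined time after the light barrier is interrupted" - and verification means testing exactly that.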

Such additional system functions or properties are noted as additional requirements. And checking whether the system is safe includes verifying these safety requirements.

Conclusion: much of the functional safety effort can be done efficiently if good requirements engineering is in place. It helps to verify and validate the system with respect to the safety functions and overall safety, and to identify which parts of the system are safety-relevant.

But how requirements should be stated will be the subject of another post, because this is very important for the value of requirements.

Enjoy !