Today, computer graphics is done by constructing geometry. Shapes like circles, rectangles, and lines are placed on a canvas with (rectangular) coordinates. Colors and lengths are prescribed or calculated as numbers. In the 3D domain, things work similarly: raytracing and other 3D-to-2D renderings are based on geometric optics. Is there any problem? Well, let me explain what I have in mind.
For years, I have been fascinated by the scenes in the Star Trek movies and series where Mr. Data says to the computer: "Computer, show me", and the computer draws the desired information in the best possible way, without being told to stretch, zoom, or even use coordinates. From such scenes I always got the vision of a computer kernel, which calculates, deduces, and collects, and a graphical subunit, which does all the graphics. The important thing is that both of them, the computer kernel and the graphics subunit, only *talk*. No API calls. To illustrate this, here is some example dialog:
"Hello Graphics Subunit, please display planet Venus and a Klingon fighter in standard orbit"
"Hello Graphics Subunit, please display this 2D point set in a chart, together with this text"
In fact, the collaboration between the computer kernel and the graphics subunit should be like that of a customer who goes to an artist and says: "Artist, paint a picture of me, capturing my power and glory." Many questions arise from this, but I only want to point out one fact: only the artist (the graphics subunit) has to bother with the information that affects *how* the picture is drawn. He is the expert for graphics. The customer (the computer kernel) should only say *what* to draw. In today's technical world, it is mostly the web server or the application core that has to deal with geometry and rendering. Of course, graphics today is described in abstract coordinates, and presentation and content are separated by HTML and CSS. But that doesn't change the fact that too many aspects of graphics and geometry are part of the application core. The latter has to call APIs and provide shapes, coordinates, and colours.
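To make the what-versus-how split a bit more concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the `Request` message, the `GraphicsSubunit` class, its `render` method): the kernel only names the content, and the "artist" owns every decision about geometry, scaling, and composition.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """A content-only message from the kernel: no shapes, no coordinates."""
    subject: str
    data: list = field(default_factory=list)

class GraphicsSubunit:
    """The 'artist': decides *how* things are drawn, entirely on its own."""
    def render(self, request: Request) -> str:
        if request.data:
            # The artist decides that a point set becomes a chart,
            # and chooses the axis range itself.
            lo, hi = min(request.data), max(request.data)
            return f"chart of '{request.subject}', axis scaled to [{lo}, {hi}]"
        # For everything else, the artist composes a scene as it sees fit.
        return f"scene of '{request.subject}', composition chosen by the artist"

# The kernel only says *what* to show, never how:
artist = GraphicsSubunit()
print(artist.render(Request("monthly temperatures", [3, 8, 15, 21])))
print(artist.render(Request("planet Venus with a Klingon fighter in orbit")))
```

Of course this toy still hides plain API calls underneath; the point of the sketch is only where the knowledge lives, not how the two sides communicate.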
Our technical possibilities are not even powerful enough to come close to what I described above. At this point I have to state a fundamental criticism: as long as our graphics technology is restricted to geometry, it will never be powerful enough. This is not the place to give reasons for this hypothesis, but it is my strong belief. To prevent misunderstandings, it is important to note that when I say geometry, I mean the mathematics as it is known and practiced today. I think it would extend our possibilities if we invested more in things like image-based rendering, and if we flipped human vision processing into the opposite direction. Vision then becomes rendering.
Well, I know these are big mental leaps and not a smooth chain of arguments; it is more a set of ideas. But anyway, they gave me some ideas for experiments with Processing. This post is long enough already, so I will sketch some details in a later post.