In the last post, I sketched the idea of a graphics subunit that simply takes render tasks describing *what* to draw, and which carries the full responsibility and specialisation for *how* to draw it. A graphics subunit that is independent in this way would have many implications, and the technology needed to implement it goes beyond purely geometrical or mathematical techniques. Machine learning will also play an important role in its implementation.
Now the question is how we can come closer to this goal. One interesting path may be to flip the processing chain of vision: instead of recording images with an eye (a camera) and processing them the way the human vision system does, let us draw pictures from remembered pieces of images. In computer science research there already exists a field called image-based rendering, which tries to create renderings from images recorded by cameras. As an extension to this more or less geometrical way of thinking, I would like to add the approach of the flipped human vision chain to the term image-based rendering (because it is simply the best term).
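To make the idea of drawing from remembered image pieces a bit more concrete, here is a minimal sketch. It is plain Java rather than a Processing sketch, and it represents grayscale patches as small arrays; both are my own simplifications, not part of any established image-based rendering API. A memory stores small patches, and a region is "drawn" by recalling the remembered patch closest to a target:

```java
import java.util.ArrayList;
import java.util.List;

public class PatchMemory {
    // A remembered image piece: a tiny grayscale patch (values in 0..1).
    static class Patch {
        final double[] pixels;
        Patch(double... pixels) { this.pixels = pixels; }
    }

    final List<Patch> memory = new ArrayList<>();

    void remember(Patch p) { memory.add(p); }

    // "Draw" a region by recalling the remembered patch with the
    // smallest squared distance to the target region.
    Patch recall(double[] target) {
        Patch best = null;
        double bestDist = Double.MAX_VALUE;
        for (Patch p : memory) {
            double d = 0;
            for (int i = 0; i < target.length; i++) {
                double diff = p.pixels[i] - target[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = p; }
        }
        return best;
    }

    public static void main(String[] args) {
        PatchMemory mem = new PatchMemory();
        mem.remember(new Patch(0.0, 0.0, 0.0, 0.0)); // a dark patch
        mem.remember(new Patch(1.0, 1.0, 1.0, 1.0)); // a bright patch
        // A mostly bright target region recalls the bright patch.
        Patch recalled = mem.recall(new double[] {0.9, 0.8, 1.0, 0.7});
        System.out.println(recalled.pixels[0]); // 1.0
    }
}
```

A real system would of course recall patches for every region of a picture and blend them at the seams; this only shows the core lookup step.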
Where does Processing come into the picture? Well, modelling the human vision system is a big step. One powerful and promising approach is Hierarchical Temporal Memory, introduced by Jeff Hawkins and his company Numenta, Inc. But as a first step it would be too complicated, even if the underlying principles are not. I prefer to start with small experiments, done in Processing.
One of the first experiments will be to create a Processing application that draws some graphics with its own dynamics. The objective is not for these graphics to show anything special. I will then send this application, let's call it the "Drawer", signals in the form of data packets generated by another application (in Haskell or Erlang). The Drawer should react by changing its drawing dynamics according to the signal received. I want to investigate how this system can be set up so that desired graphics can be drawn by providing selected signals. By the way, the signal path is planned to run over XMPP.
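A minimal sketch of how the Drawer's reaction could look, assuming a made-up packet format ("SPEED:…", "BRIGHT:…") and a plain in-memory queue standing in for the planned XMPP signal path:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class Drawer {
    // Drawing dynamics: here just a rotation speed and a brightness level.
    double rotationSpeed = 1.0;
    int brightness = 128;
    double angle = 0.0;

    // React to an incoming signal packet. "SPEED:x" and "BRIGHT:n" are
    // hypothetical packet formats chosen for this sketch, not a fixed protocol.
    void onSignal(String packet) {
        String[] parts = packet.split(":");
        switch (parts[0]) {
            case "SPEED":  rotationSpeed = Double.parseDouble(parts[1]); break;
            case "BRIGHT": brightness = Integer.parseInt(parts[1]); break;
        }
    }

    // One step of the drawing loop (in a Processing sketch this logic
    // would live inside draw()).
    void step() { angle += rotationSpeed; }

    public static void main(String[] args) {
        Drawer drawer = new Drawer();
        // Stand-in for the XMPP connection: packets from another process.
        Queue<String> signals = new ArrayDeque<>();
        signals.add("SPEED:0.5");
        signals.add("BRIGHT:200");
        while (!signals.isEmpty()) drawer.onSignal(signals.poll());
        drawer.step();
        System.out.println(drawer.angle + " " + drawer.brightness); // 0.5 200
    }
}
```

The interesting part of the experiment is then the inverse direction: given a desired drawing, find the sequence of such packets that produces it.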
The next step would be to add memory, which sets the course toward the flipped vision processing chain. Processing is a very good tool for this, especially since it is possible to use Scala. So let's start!