Yet another boring post; this is not for you! Leave! :-)
This post will be of more interest to other artists working on genetic algorithms, or to digital artists in general, than to the general public.
These days I've been trying to improve the 3D visual capabilities of my software, along with some other functionality; it was a dream of mine to push forward with the possibilities the software I use offers me at this moment. Now that I have some time thanks to the winter vacations here in the south (two weeks), I sat down for several hours a day to try to make progress. So I pestered two of my professors at the university I attend to help me with the source code, since I am not good at programming (I feel I never will be) and I easily forget anything I learn about O.O.P. (Horacio and Gerardo, thanks!).
So, first I had this idea of improving the 3D features enough to eventually start making images of "sculptures" created entirely with genetic algorithms: the shapes, the colours, the positions, everything except the camera, the lighting, and sometimes the background (optional). And from there, a dream of moving into organic art and eventually becoming an organic artist. With time and effort, I estimated about a year and a half of work to get decent 3D results :-) maybe a year at least. I am amazed at how fast the plan has evolved...
I might as well start by showing some halfway results:
First, my professors and I tried to translate one function (among the functions we had defined for use in IGAs when producing 2D images) into a new class: the "Function3D" function was converted into a class, which we also improved A.M.A.P. (as much as possible). These were the very early results, not very 3D-looking:
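To give an idea of the conversion, here is a minimal sketch of what such a "Function3D" class could look like: a 2D image function wrapped into an object that yields a height and a colour value for each sample. The names and signatures here are my illustration, not our actual code.

```cpp
#include <cmath>
#include <functional>

// Hypothetical sketch: a "Function3D" class wrapping two scalar fields,
// one for the height (z) and one for the colour at each (x, y) sample.
class Function3D {
public:
    using Field = std::function<double(double, double)>;

    Function3D(Field height, Field hue)
        : height_(std::move(height)), hue_(std::move(hue)) {}

    // Height of the simulated surface at (x, y).
    double z(double x, double y) const { return height_(x, y); }

    // Scalar colour value at (x, y), to be mapped to a palette.
    double colour(double x, double y) const { return hue_(x, y); }

private:
    Field height_, hue_;
};
```

A biomorph-like surface would then just be a `Function3D` built from trigonometric fields, sampled over a grid.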
This first one looks like a very simple biomorph with the wrong colours and a basic fractal background, huh? Strangely, when we did that with functions and classes, we found we were visually going "backwards" in our results :-) looking like antique math-based visual art... strange. Another example, a little better achieved:
Then we added functions to the 3D shapes, not only for colouring but also to influence their form, so that both shapes and colours became fully driven by genetic algorithms:
Then I noticed two things. The positive: something interesting might arise from animating this strange kind of life (it is the next logical step, and renowned people have worked on biomorphs, automata, neural networks, L-systems and GA animation: Sims, Rooke, Ostman, Latham, Anderson). The negative: the images lacked light and shadow to become more "realistic" materials, beyond simple binary computer colours; look how the fake 3D shape of the three previous images becomes a flesh-like shape later:
The best result at this stage of development, I think, was this slightly 3D fractalish one, named "strange flora":
As I was the interested party, I was the one who had to think out the general plan to get the most appropriate results for my own needs. I saw that if we kept working on translating common functions into 3D simulation, or on creating new classes out of previous 3D functions, we were really going nowhere. I realized we needed a different plan, and a rendering engine, to turn the processed math values into "decent" images.
So I started to plan again. I noted that the software might need a second branch from this point on: one program for 2D images like the ones I post on this blog, and another for 3D images, maybe to show on another blog, because 3D images are much slower to produce and leave much less room for similarity-driven recognition by the human eye. This implies another kind of artistic work, more technical, less human/artistic in the common sense of the concept... so, in the end, it is another "brush" and technique for the artist.
The first giant problem was to determine how the 3D shapes generated by genetic algorithms could be kept under any sense of control, e.g. how they would be restrained from blocking the view of the "camera", the point of view... As a first attempt, I thought of fewer planes and more predefined geometric shapes (spheres, cylinders, cones, etc.), placed, rotated, linked and coloured by GAs; so I decided to go down this route, and later allow the GAs to generate their own objects.
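A rough sketch of what I mean, under my own assumptions (these names and the exact constraint are illustrative, not the software's actual genome): each gene places one predefined primitive, and a minimum-distance rule keeps shapes from crowding the camera at the origin.

```cpp
#include <array>
#include <cmath>

// Hypothetical GA genome for the "predefined shapes" approach:
// one gene per primitive, with GA-chosen placement and colour.
enum class Shape { Sphere, Cylinder, Cone };

struct Gene {
    Shape shape;
    std::array<double, 3> position;  // x, y, z in scene units
    std::array<double, 3> rotation;  // degrees around each axis
    std::array<double, 3> colour;    // rgb components in [0, 1]
};

// Push a candidate position out of a clear zone around the camera
// (assumed to sit at the origin), so shapes cannot block the view.
std::array<double, 3> keepClearOfCamera(std::array<double, 3> p,
                                        double minDist) {
    double d = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
    if (d == 0.0)
        return {0.0, 0.0, minDist};     // degenerate case: push forward
    if (d < minDist) {
        double s = minDist / d;         // scale out radially
        for (double& c : p) c *= s;
    }
    return p;
}
```

The same idea extends to other constraints (linking shapes, bounding the scene), applied as repairs after mutation and crossover.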
I thought, "I can render later: I produce the math values, then render them with some external engine"... but I saw another problem arising: do we program a new render engine to add to my own software, or do we use one already released under an open license? The choice was obvious: I don't want to program if I can avoid it :-) So I considered 3D Studio and Rhino3D (proprietary, and not for GNU/Linux), Renderman (it confused me), Blender for GNU/Linux (a good option, but not photorealistic enough for my taste), YafRay for GNU/Linux (I couldn't integrate it easily into my software), Xara Xtreme (source code too large) and PovRay for GNU/Linux. The latter was the one selected: source code available, a 64-bit version, and photorealistic enough, including the radiosity illumination algorithm. Cool.
So I called for a source refactoring, but a port from Object Pascal to C++ was suggested instead, and later on we ported the whole application to C++ (mostly they did); then we made some modifications to export to PovRay's .pov format, and I started to play with the tool combination.
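The exporter boils down to turning GA-produced numbers into PovRay scene-description text. A minimal sketch of the idea, showing only the sphere case (the function name is my invention; the real exporter covers more primitives and textures):

```cpp
#include <sstream>
#include <string>

// Emit one PovRay scene-description entry for a sphere: centre,
// radius, and a flat rgb pigment chosen by the genetic algorithm.
std::string povSphere(double x, double y, double z, double radius,
                      double r, double g, double b) {
    std::ostringstream out;
    out << "sphere { <" << x << ", " << y << ", " << z << ">, " << radius
        << " pigment { color rgb <" << r << ", " << g << ", " << b
        << "> } }";
    return out.str();
}
```

Concatenating one such entry per gene, plus fixed camera and light-source blocks, yields a complete .pov file that PovRay can render on its own.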
The first result: too basic, the camera is misplaced and the light insufficient; nice, like a child taking its first picture! :-) :
It doesn't even look like it was GA-generated.
Second result: only the position and rotation of the shapes are decided by GAs, but under heavy control:
Third result: now we are talking; these grew around the shapes of the second result. The GA-generated values were: position, rotation (not noticeable on spheres), and the texture of some of them. The fixed part was the orange colour, and the soft refraction of the light over the blue texture, controlled by the radiosity algorithm:
Fourth result: exactly the same GA-generated form, but with textures, fractally generated scenery, green rubber and a mirror texture. Changing the scenery values didn't take more than two minutes; rendering is what consumes the time.
Fifth result: the first forced resemblance, a sculpture or architectural design. The tiled floor is made of reflective hexagons (what is it with me and hexagons?!), the sky is weird and fractalish, and the oval crystal shape hanging from the sculpture is transparent and reflects like glass. All of it was created by GAs, including the textures, weird in some parts, like the shadows, which are lighter than the illuminated zones! :
(If you don't believe that genetic algorithms can produce very, very interesting and appealing architecture, you should look at some works of Celestino Soddu, from Milan, with his software Argenìa.)
From then on, some interesting results have appeared; among them, this abstract nº 3, which has the sky but not the land, doh! (and I didn't keep the seed; I was not paying attention):
And this abstract nº 8:
As you can see, the only sections of the "creation" not driven by genetic algorithms are the backgrounds, created from fractal values (I'll keep working on bringing these to genetic algorithms too, but there are priorities). The only hand-made part is the selection of the colours of each background, the colours of the lights in the PovRay renderer, and the camera position.
So this means the artwork is made (almost) completely by digital means, all in one combined software step. I select some colours to improve visibility or provoke a special effect, and watch for resemblance, when I am looking for it. It is 50%-50% work, as it was in 2D: the software does its part, and the artist selects their preference among the options or phenotypes offered. That is what is called Interactive Genetic Algorithms (IGA, not HBGA), where human evaluation mediates as the fitness function; neither the human nor the system can produce, on their own, the result they make together. This will be common in artistic expression with GAs for a long time.

There are already trends toward replacing the artist's fitness expression with artificial intelligence through combinatorial optimization. A lot of AI, and applied art theory, will be needed for the machine/system to actually recognize what can be visually appealing to humans, and then to generate further images from those learned criteria. Science evolves by facing this kind of problem; it will be great to see how it resolves this one. For now, the better way to evaluate visual appeal (or IGA music) is to have an artist, or any person, intercede in the GA process. Anyway, computer systems can still help a lot in easing the routine of human evaluation in IGAs; e.g. by determining the value frequencies of the most successfully selected evolutions, which can then be kept as default values, "evolving the evolution" by restructuring the humanly imposed limits on initialization, selection, mutation, and crossover.
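The IGA loop I am describing can be sketched in a few lines, with the human evaluation as a pluggable callback in place of a fitness function. This is a bare-bones illustration under my own assumptions (a genome is just a vector of parameters; selection keeps one parent), not the structure of the actual software:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// A genome is a flat vector of GA parameters (positions, colours, ...).
using Genome = std::vector<double>;

// The "fitness function" of an IGA: a human picks the index of the
// phenotype they prefer out of the current population.
using HumanChoice = std::function<std::size_t(const std::vector<Genome>&)>;

// Mutate each parameter with a small deterministic pseudo-random drift
// (a linear congruential step, enough for this sketch).
Genome mutate(Genome g, double amount, unsigned& seed) {
    for (double& v : g) {
        seed = seed * 1664525u + 1013904223u;
        v += amount * ((seed % 1000) / 1000.0 - 0.5);
    }
    return g;
}

// One IGA generation: the chosen phenotype becomes the parent of a new
// population of mutated variants, same size as before.
std::vector<Genome> nextGeneration(const std::vector<Genome>& population,
                                   const HumanChoice& pick,
                                   unsigned& seed) {
    const Genome& parent = population[pick(population)];
    std::vector<Genome> next;
    for (std::size_t i = 0; i < population.size(); ++i)
        next.push_back(mutate(parent, 0.1, seed));
    return next;
}
```

The "evolving the evolution" idea would then amount to logging which value ranges the human keeps selecting, and narrowing the initialization and mutation bounds toward them over time.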
Eventually I might move from IGAs to GAs completely, relinquishing my selector role to the computer system; this is at least more feasible in the 3D expression of GAs, where, as far as I've noticed during this time, the human has a more active role in selecting the frontiers of the evolution than the results themselves (quite the opposite of my role in 2D image production). I can even dream of managing a neural network in the future; dreams are free.
I'm considering opening a second blog for posting 3D genetic algorithm images and, maybe, animations; but this implies, in a way, more research and development, more processing power, more programming, maybe a rendering cluster too. So, in the end, maybe my self-funded work days are coming to an end, at least now that 3D is included; my current university will hardly support this branch of personal work, so producing animations, improving the base software and going further with the 3D visuals will be a real pain in the ass, because of these technical/financial limitations.
At least this forced me to improve my 2D visual IGA software; I now have it in C++, and I can click and drag instead of entering so many math values, great! Unfortunately, a big part of the source code is proprietary, due to the work of many people on it: that is the price of accepting interested collaboration.
So I guess I'll keep working while living with, and solving, these limitations. I never was, nor will be, a good programmer, but the pizza, Warsteiner and coffee diet will go on for a while :-D nice memories from the past while moving into the future!
I leave you with the "pulpobot" ("roboctopus"?), fixing some piece of hardware in outer space. Enjoy.
Cheers,
Cristian.
[EDIT]: The 3D works have been posted on this blog since December 2007; click on the name of the blog to see the latest ones.