In 2019, developer Neil Chatterjee and I created a series of genetic sculptures. There were many projects at the time, and there are still many today, focused on making sculpture with 'Artificial Intelligence', Artificial Neural Networks, or GANs: sculpture created entirely by computers rather than with computers. Often the results were images that merely looked like sculptures, images that could pass as sculptures, produced by networks trained on huge libraries of existing works. Made this way, these sculptures were bland averages. They were good at being sculptures you would never think twice about, good at sneaking past as real objects. Dissatisfied with these approaches and outcomes, we set out to create our own generative sculptures, which I'll explain in a not-so-brief way below.
The process was inspired by the GANs, or 'Generative Adversarial Networks', emerging at the time. These networks consist of a generator and a discriminator. As something is generated, whether a sculpture, a line of text, or an image, the discriminator judges its 'fitness', giving it a score based on the dataset: "how real is this image?", "does this text make sense?". With each round, the generator and the discriminator learn from each other and improve, arriving at more accurate, 'fitter' results. With sculpture, however, there is no such fitness rating.
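The generate-score-improve loop can be illustrated in miniature. This is not a real GAN (there are no neural networks, and the discriminator here is a fixed scoring function rather than a learning one); the names and numbers are assumptions chosen only to keep the sketch self-contained.

```python
import random

rng = random.Random(0)
REAL_MEAN = 10.0  # stand-in for the 'dataset' the discriminator knows about

def discriminator(sample):
    """Fitness score: 1.0 means indistinguishable from the real data."""
    return 1.0 / (1.0 + abs(sample - REAL_MEAN))

g = 0.0  # the generator's single tunable parameter
for step in range(200):
    candidate = g + rng.uniform(-1.0, 1.0)       # generator proposes a variation
    if discriminator(candidate) > discriminator(g):
        g = candidate                            # keep proposals the critic prefers
print(g)  # drifts towards REAL_MEAN over the 200 rounds
```

The point is the feedback loop: the generator only keeps changes the discriminator rates as 'more real'. The sculpture project replaces that automatic critic with a human one.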
What makes a good sculpture? People can argue for proportion, balance, or harmony, but in the end these are almost entirely subjective. We can say what a sculpture is made of practically, "Material, Place, Surface, Edge, Texture, Colour, Scale, Mass, Centre of Gravity, Volume, Space, Movement, Light and Memory", according to Herbert George, but not what makes it good, let alone great. So we decided to leave the discrimination to individual humans; for generation, though, we had to devise a way to reliably generate sculptures and have them share traits within each generation.
To achieve this, each sculpture started as a cube, to which a sequence of four steps was applied. First, face selection: choosing any of the cube's six faces. Second, extrusion: pulling a new face, and with it a new volume, away from the selected one. The third step was rotation, and the fourth scaling. This meant each sculpture was made up of building blocks, similar to DNA, that could be interchanged with other sculptures - our version of ACGT was FERS (Face selection, Extrusion, Rotation, Scaling). In code, it looked something like this - 3,4,48,34,.....
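A minimal sketch of what such a FERS genome might look like. The value ranges (extrusion distance, degrees, scale percentage) and function names are my assumptions for illustration, not the project's actual units or code:

```python
import random

FACES = 6  # a cube has six faces to select from

def random_gene(rng):
    """One FERS building block: (Face selection, Extrusion, Rotation, Scaling)."""
    return (
        rng.randrange(FACES),   # which face to operate on, 0-5
        rng.randint(1, 50),     # extrusion distance (arbitrary units)
        rng.randint(0, 359),    # rotation in degrees
        rng.randint(50, 150),   # scale as a percentage
    )

def random_genome(length, rng):
    """A sculpture's 'DNA': a fixed-length sequence of FERS genes."""
    return [random_gene(rng) for _ in range(length)]

rng = random.Random(42)
genome = random_genome(8, rng)
print(genome[0])  # one gene, e.g. (face, extrude, rotate, scale)
```

Because every sculpture is just such a list of numbers, two sculptures can swap genes the way the article describes swapping DNA.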
Via this methodology, each sculpture 'generation' was randomly created, 100 individuals at a time, followed by a selection process. Once a maximum of 4 favourites were chosen, the other 96 sculptures were deleted. These 4 favourites were then interbred, producing 100 new sculpture children. Built into the creation process was an element of mutation, a measured amount of randomness to drive evolution and introduce new traits between the sculptures. The cycle was repeated as many times as the participant felt necessary to arrive at their perfect sculpture. Often the process tended towards homogeneity, as participants carried biases towards certain looks or forms.
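The breed-and-mutate step above can be sketched as follows. The crossover scheme (each gene drawn from a random parent) and the mutation rate are assumptions on my part; the original project's exact parameters aren't documented here:

```python
import random

POPULATION = 100     # sculptures per generation
GENOME_LEN = 8       # FERS genes per sculpture (assumed length)
MUTATION_RATE = 0.05 # chance a gene is replaced outright (assumed)

def random_gene(rng):
    """One FERS gene: face, extrusion, rotation, scale (illustrative ranges)."""
    return [rng.randrange(6), rng.randint(1, 50),
            rng.randint(0, 359), rng.randint(50, 150)]

def crossover(parents, rng):
    """Child takes each gene from a randomly chosen favourite parent."""
    return [rng.choice(parents)[i][:] for i in range(GENOME_LEN)]

def mutate(genome, rng):
    """Occasionally replace a gene entirely, introducing a new trait."""
    for i in range(len(genome)):
        if rng.random() < MUTATION_RATE:
            genome[i] = random_gene(rng)
    return genome

def next_generation(favourites, rng):
    """Breed 100 children from the (up to 4) sculptures the human kept."""
    return [mutate(crossover(favourites, rng), rng) for _ in range(POPULATION)]

rng = random.Random(1)
favourites = [[random_gene(rng) for _ in range(GENOME_LEN)] for _ in range(4)]
children = next_generation(favourites, rng)
print(len(children))  # 100
```

The human plays the discriminator: they pick the favourites, the code does the breeding, and mutation keeps the gene pool from collapsing too quickly into the homogeneity mentioned above.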
In this way you are truly co-creating with the machine: not guiding or shaping it, but encouraging and dissuading it (dissuading being a nice way of saying culling the sculptures you don't like). Your taste develops alongside the code within the sculpture, arriving ever closer to an ideal.
Reflected in the material of each sculpture is a portrait of its co-creator.