- This project relies on pattern recognition in images by neural nets, a technique popularised by Google's Deep Dream, although I use a derivative of this called Pikazo. Having mostly been influenced by “modern art” (now around 100 years old), I am interested in where fine-art photography might be going in the future, and I think that “augmented intelligence” (neural nets) and “augmented reality” (which I have only just started looking at) might be one avenue.
- With Pikazo, one feeds the neural net two images: a main image and a second representing a “style”. The net finds patterns in both images and develops a new image, modifying the patterns in the main image with those in the style image. The new image is recognisably derived from the inputs, because processes similar to those used by the brain generate patterns the brain recognises; the whole process is the reverse of Google's attempts to automatically index pictures based on the patterns found in them.
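- For the technically curious: tools in this family are generally understood to build on “neural style transfer” (Gatys et al.), in which a “style” is captured not as pixels but as correlations between a network's feature channels (a Gram matrix); the output image is nudged until its own feature correlations match the style image's. Whether Pikazo implements exactly this is an assumption on my part, but a minimal NumPy sketch of the idea, with random arrays standing in for a real network's feature maps, looks like this:

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of a (channels, height, width) feature map."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # each row: one channel's responses
    return flat @ flat.T / (h * w)      # (c, c) matrix of channel correlations

def style_loss(features_a, features_b):
    """Mean squared difference between two images' Gram matrices."""
    return np.mean((gram_matrix(features_a) - gram_matrix(features_b)) ** 2)

rng = np.random.default_rng(0)
style = rng.standard_normal((8, 16, 16))   # stand-in for style-image activations
other = rng.standard_normal((8, 16, 16))   # stand-in for a candidate output

print(style_loss(style, style))   # identical patterns: loss is 0.0
print(style_loss(style, other))   # different patterns: loss is positive
```

Style transfer then minimises this loss (plus a content term for the main image) by adjusting the output image's pixels, which is why the result keeps the main image's structure while taking on the style image's textures.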
- I then upsize the image and process it in Lightroom as I would a RAW image from my camera. I have full control over the input images and the final result.
- I present typical images here, with a “triptych” showing the inputs and raw output for a couple of them.
- All pictures are resized to suit Zenfolio. Full-size TIF images are available on request, but achieving my ARPS (Associate of the Royal Photographic Society) distinction shows that my technique is adequate; in any case, it is more appropriate to match the image size and resolution to the context in which it is shown.
© David Norfolk and David Rhys Enterprises Ltd