Interactive CPPNs in GLSL (2018)
TL;DR: Compositional Pattern Producing Networks (CPPNs) are learned functions mapping x,y coordinates to r,g,b colours, much like fragment shaders in the graphics pipeline. They can therefore be implemented directly in GLSL and integrated into existing media arts pipelines.
Authors: Xavier Snelgrove and Matthew Tesfaldet
Poster presented at the 2018 NeurIPS Workshop on Machine Learning for Creativity and Design.
Try interactive demos in your browser
GLSL CPPNs can run in the browser using WebGL. Try a few examples here, hosted on ShaderToy.
Brief overview
There is a tradition of co-opting technological systems to create imagery in ways they were never designed for. Recent activation-maximization approaches, for instance, create images via neural networks that were instead designed and trained to classify images (these methods are the basis of DeepDream).
A very different community has been exploring the artistic possibilities inherent in shaders: small programs that run on the GPU to compute, among other things, the colour of each pixel on screen.
This work engages with two separate traditions. We are inspired by work on generating images with differentiable parameterizations, most recently summarized and built on in the excellent Differentiable Image Parametrizations article on distill.pub. This is compatible with activation-maximization-style approaches to image creation, where the parameters of the generator are optimized via gradient descent to trigger particular activations in a trained image classification neural network.
In particular, the CPPN approach, in which we optimize over the weights of a compositional network that maps x,y coordinates directly to r,g,b colours rather than optimizing over pixels directly, allows for smooth, high-resolution generated images. By changing the basis functions and architecture of the CPPN we shape the inductive bias of the network (in this case, effectively the set of images that are “easy” to represent vs. “hard”), and thereby the aesthetic of the generated images.
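For concreteness, a small CPPN with a single hidden layer of sinusoidal units might compute the following (the depth, width, and choice of sin here are illustrative assumptions, not a fixed recipe):

$$
\mathbf{c}(x, y) = \sigma\!\left( W_2 \, \sin\!\left( W_1 \begin{bmatrix} x \\ y \end{bmatrix} + \mathbf{b}_1 \right) + \mathbf{b}_2 \right)
$$

where $\mathbf{c}(x, y)$ is the r,g,b colour at a coordinate, $W_1, W_2, \mathbf{b}_1, \mathbf{b}_2$ are the optimized weights, and $\sigma$ squashes the output into $[0, 1]$ (for example, $\sigma(t) = (1 + \tanh t)/2$). Swapping $\sin$ for a different basis function (Gaussian, absolute value, sigmoid) changes which images the network finds easy to represent.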
CPPNs structurally match fragment shaders (or pixel shaders) in the classic computer graphics pipeline. This means that we can implement them in the OpenGL Shading Language (GLSL), making them drop-in compatible with a large number of computer graphics and media arts systems, such as Unity, OpenFrameworks, and TouchDesigner.
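As a minimal sketch of this correspondence, here is the one-hidden-layer CPPN above written as a ShaderToy-style GLSL fragment shader. The mainImage entry point and iResolution uniform are ShaderToy conventions; the numeric weights are arbitrary hand-picked constants for illustration, where a real pipeline would paste in weights exported from a trained network:

```glsl
// Minimal CPPN as a ShaderToy-style fragment shader.
// One hidden layer of four sin units; all weights below are
// illustrative hand-picked constants, not trained values.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Map pixel coordinates to roughly [-1, 1]: the CPPN's x,y inputs.
    vec2 xy = (2.0 * fragCoord - iResolution.xy) / iResolution.y;

    // Hidden layer: affine maps of (x, y) through a sinusoidal basis.
    vec4 h = sin(vec4(
        dot(xy, vec2( 1.7, -2.3)) + 0.5,
        dot(xy, vec2(-1.2,  0.8)) - 0.3,
        dot(xy, vec2( 0.6,  2.5)) + 1.1,
        dot(xy, vec2( 2.2,  0.4)) - 0.7));

    // Output layer: affine maps of the hidden units, squashed into
    // [0, 1] via tanh to give the r,g,b colour at this pixel.
    vec3 rgb = 0.5 + 0.5 * tanh(vec3(
        dot(h, vec4( 0.9, -0.6,  0.3,  0.8)),
        dot(h, vec4(-0.4,  1.0,  0.7, -0.2)),
        dot(h, vec4( 0.5,  0.2, -0.9,  0.6))));

    fragColor = vec4(rgb, 1.0);
}
```

Deeper networks chain more matrix multiplies and nonlinearities in the same way, and modulating the weights with ShaderToy's iTime uniform is one way to animate the output.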
This work builds on an active tradition of using shaders as creative material, in ways quite distinct from their originally designed purpose. This community intersects with the demoscene, and shares work and techniques in places like ShaderToy (where the interactive examples above are hosted) and The Book of Shaders.
Citing this work
@inproceedings{SnelgroveTesfaldet2018,
  author    = {Snelgrove, Xavier and Tesfaldet, Matthew},
  title     = {Interactive CPPNs in GLSL},
  booktitle = {Proceedings of the NeurIPS 2018 Workshop on Machine Learning for Creativity and Design},
  year      = {2018},
  location  = {Montreal},
  url       = {https://wxs.ca/research/cppn-to-glsl},
}