Art, AI and Google I/O 2016
At Google I/O 2016 Ramzi Rizk told me there's a field called Computational Aesthetics. It is so nascent that it doesn't even have a Wikipedia page. Computational Aesthetics is the study of how computers can judge whether a picture is beautiful. This is different from systems like Flickr's Interestingness, because the system infers beauty from the image itself rather than from the social signals generated by people's interactions with the image. In fact, there's already been research showing that social signals lead communities like Flickr to overlook hidden gems. This research raises the tantalising possibility that large numbers of overlooked masterpieces are hiding in the historical corpus of art.
I find this intersection of art and machine learning interesting because at scale it leads to the discovery of new tools and new perspectives on something that we consider to be uniquely human. For example, the spatial visualisation used in the "Machine learning & art" talk at I/O led me to t-SNE as a mechanism for building two-dimensional maps of high-dimensional spaces.
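The idea is simpler than it sounds: each image is summarised as a high-dimensional feature vector, and t-SNE squashes those vectors down to x/y coordinates while trying to keep similar images near each other. A minimal sketch with scikit-learn, using random vectors as a hypothetical stand-in for real image features:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for image features: 200 "images", each
# described by a 512-dimensional vector (in practice these might
# come from a neural network's embedding layer).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))

# t-SNE projects the points into 2-D, preserving local neighbourhoods,
# which gives each image a position on a two-dimensional map.
coords = TSNE(
    n_components=2,
    perplexity=30,
    init="random",
    random_state=0,
).fit_transform(features)

print(coords.shape)  # (200, 2): an (x, y) position per image
```

With real features the resulting map tends to cluster visually similar works together, which is what made the spatial visualisation in the talk possible.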
This raises the possibility of building pirate maps of information spaces or hypertexts, and of connecting that to Vannevar Bush's ideas about stigmergy in the Memex. Imagine being able to create and share your own trail through the space of all art, or to reinvent the traditional slideshow of the family's holiday photos.
Today apps like Prisma merely help us make alternative versions of existing photos. At the same time apps like The Roll and Google Photos help us identify our best photos. In the future they might help photographers to create photos from scratch and tell new kinds of hypertextual stories.