Surprising image pairs and their resulting meanings are generated automatically by an algorithm using machine vision. Each published pair consists of an image from a current Guardian article and a cultural artifact from the collection of the Metropolitan Museum of Art. The script first downloads a photo from the article, uses Google Cloud Vision to label what it depicts, and then queries the Metropolitan Museum's database for the most similar work of art.
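Under stated assumptions, the matching step could be sketched roughly as follows using the Met's public open-access Collection API; the function names and the dict shape of the labels are illustrative, not the project's actual code:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

MET_API = "https://collectionapi.metmuseum.org/public/collection/v1"

# Labeling the downloaded Guardian photo would use Google Cloud Vision, e.g.:
#   client = vision.ImageAnnotatorClient()
#   response = client.label_detection(image=vision.Image(content=photo_bytes))
# (requires the google-cloud-vision package and API credentials; the labels
# are assumed here to be normalized into dicts with "description" and "score")

def top_label(labels):
    """Pick the highest-confidence description from the Vision labels."""
    return max(labels, key=lambda l: l["score"])["description"]

def find_met_match(query):
    """Search the Met's open-access API for the first matching artwork."""
    with urlopen(f"{MET_API}/search?{urlencode({'q': query, 'hasImages': 'true'})}") as r:
        hits = json.load(r)
    if not hits.get("objectIDs"):
        return None  # no artwork in the collection matches the label
    with urlopen(f"{MET_API}/objects/{hits['objectIDs'][0]}") as r:
        obj = json.load(r)
    return {"title": obj.get("title"), "image": obj.get("primaryImage")}
```

Taking the first search result is only one possible notion of "most similar"; the actual script may rank candidates differently.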
This yields the pair from which the post is created. The meanings produced by this juxtaposition are largely a matter of chance and in some cases may be problematic. After all, many cultural artifacts have a turbulent history and refer to acts that would be unacceptable today. Some posts thus also open up space for critical reflection on them.