Google Arts & Culture

lab212
How, through contemporary technologies, can we re-examine the masterpieces of art history?

t-SNE Map

With t-SNE Map, you can explore an interactive 3D landscape created by Machine Learning algorithms that organised thousands of artworks by visual similarity: the more similar two artworks are, the closer together they appear. The algorithms only “looked” at the artworks. No metadata was used; the visual similarity was calculated purely from the images, by a computer-vision algorithm also used in Google Search. We then applied the t-SNE algorithm, usually used to produce static visualisations, to create an interactive virtual space that you can navigate, looking at artworks from any angle and at scale.
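To make the t-SNE step concrete, here is a minimal sketch in Python using scikit-learn. It assumes each artwork has already been reduced to a feature vector by an image-embedding model; Google's actual model is not public, so random vectors stand in for the real embeddings.

```python
# Minimal sketch: project high-dimensional image embeddings into 3D with t-SNE.
# The random `features` array is a stand-in for real artwork embeddings.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(seed=0)
features = rng.normal(size=(2000, 512))  # 2000 artworks, 512-dim embeddings

# n_components=3 yields the 3D coordinates for the navigable landscape;
# artworks that are close in embedding space end up close in the 3D map.
coords = TSNE(n_components=3, perplexity=30, init="random").fit_transform(features)
print(coords.shape)  # (2000, 3)
```

The output coordinates can then be fed to a renderer, which is where the interactive, navigable space described above comes from.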

Free Fall

Mathematical formulas are used to place the artworks in a 3D environment, where you can choose to visualise what a cultural big bang might look like, or travel through the sea of artworks decade by decade. Artworks are animated in real time in the web browser using WebGL, at resolutions up to 1024 px, made possible by a mix of data pre-processing and “level-of-detail” mechanisms. This allows you to interact with the artworks and identify patterns at scale.
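A hypothetical sketch of the kind of level-of-detail rule described above: each artwork is pre-processed into several texture sizes, and the renderer picks one based on how large the artwork appears on screen, so only close-up views load the full 1024 px image. The function name, focal length, and thresholds are illustrative, not the project's actual code.

```python
# Illustrative level-of-detail selection: map an artwork's projected on-screen
# size to the smallest pre-generated texture that still covers it.
MIP_SIZES = [64, 128, 256, 512, 1024]  # pre-processed resolutions per artwork

def pick_texture_size(artwork_world_size: float, distance: float,
                      focal_length_px: float = 800.0) -> int:
    """Return the smallest pre-processed size covering the projected size."""
    projected_px = artwork_world_size * focal_length_px / max(distance, 1e-6)
    for size in MIP_SIZES:
        if size >= projected_px:
            return size
    return MIP_SIZES[-1]

print(pick_texture_size(1.0, 20.0))  # far away -> small texture (64)
print(pick_texture_size(1.0, 1.0))   # close up  -> full 1024 px texture
```

Combined with pre-processing all textures ahead of time, this keeps thousands of animated artworks within the memory and bandwidth budget of a web browser.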

Curator Table

Use the Curator’s table to discover new insights and connections between artworks. Inspired by curators around the world, who lay out prints on a table when planning an exhibition, we applied the same principle to our virtual gallery. Assets are animated in real time. You can search for objects, styles and artists, and view them in one 3D space.
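As a rough illustration of the table metaphor, and not the project's actual code, the sketch below lays a set of search results out as a grid of positions on a virtual table plane, much as prints might be arranged for an exhibition. The spacing value is arbitrary.

```python
# Illustrative layout: place n search results in a near-square grid of
# (x, z) positions centred on the table's origin.
import math

def table_layout(n_artworks: int, spacing: float = 1.5) -> list[tuple[float, float]]:
    cols = math.ceil(math.sqrt(n_artworks))
    positions = []
    for i in range(n_artworks):
        row, col = divmod(i, cols)
        x = (col - (cols - 1) / 2) * spacing
        z = (row - (cols - 1) / 2) * spacing
        positions.append((x, z))
    return positions

# e.g. positions for the first matches of a query such as "portrait"
print(table_layout(12)[:4])
```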

Tags

With Tags, a machine looked at the artworks and labelled them with thousands of tags, without human intervention. The keywords were generated by an algorithm also used in Google Photos, which analysed the artworks purely from the images, without any metadata. The algorithm drew on over 4,000 unique tags, such as “morning” and “happy”. The experiment explores whether Machine Learning could help people browse artworks similarly to how they search on the web.
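A minimal sketch of the browsing side, assuming a tagging model has already produced labels per artwork (the Google Photos model itself is not public, so the tags below are illustrative): an inverted index from tag to artworks lets tags be looked up the way keywords are on the web.

```python
# Build an inverted index (tag -> artworks) from per-artwork predicted tags.
from collections import defaultdict

# Illustrative output of an image-tagging model: artwork id -> predicted tags.
artwork_tags = {
    "artwork_001": ["morning", "landscape", "river"],
    "artwork_002": ["happy", "portrait", "morning"],
    "artwork_003": ["portrait", "night"],
}

index: dict[str, list[str]] = defaultdict(list)
for artwork_id, tags in artwork_tags.items():
    for tag in tags:
        index[tag].append(artwork_id)

print(index["morning"])  # ['artwork_001', 'artwork_002']
```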