Museum Vision is an open source toolkit that uses machine learning for art identification.


1. Take a photo
Using a museum's app, the user takes a photo of the work from any angle.

2. Our API identifies it
The image is sent to a server, which matches it against a relatively small collection of preexisting images of artwork. This is made possible by transfer learning: knowledge captured by one machine learning model is reused to train another.
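At its core, matching a photo against a small reference collection can be done by comparing image embeddings. The sketch below is illustrative only: the `embed()` function is a stub standing in for a transfer-learned feature extractor (in a real deployment this would be a pretrained network's penultimate-layer activations), and the artwork names are made up.

```python
import numpy as np

def embed(image_bytes: bytes, dim: int = 8) -> np.ndarray:
    """Stand-in for a pretrained feature extractor (hypothetical).
    A real system would run the image through a network pretrained on a
    large dataset and reuse its learned features."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalize for cosine similarity

# Small reference collection: one embedding per known artwork.
collection = {name: embed(name.encode()) for name in
              ["water-lilies", "starry-night", "the-scream"]}

def identify(photo: bytes) -> str:
    """Return the artwork whose embedding is most similar (cosine)."""
    q = embed(photo)
    return max(collection, key=lambda name: float(q @ collection[name]))
```

In practice the visitor's photo differs from the reference image, so the feature extractor must map different photos of the same work to nearby embeddings; providing that robustness is exactly what the transfer-learned model contributes.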

3. Data is served
Once the artwork is identified, information about the work, the artist, and related pieces is sent back to the user's phone.
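The response could be a simple JSON document. The field names and values below are purely illustrative assumptions, not the API's actual schema:

```python
import json

# Hypothetical response payload; every field name here is an assumption.
response = json.dumps({
    "artwork": {"title": "Water Lilies", "year": 1906,
                "medium": "Oil on canvas"},
    "artist": {"name": "Claude Monet", "born": 1840, "died": 1926},
    "related": ["Haystacks", "Rouen Cathedral"],
})

# The client parses the payload and renders it in the app.
data = json.loads(response)
```

A flat, self-describing payload like this keeps client code simple: the app only needs a JSON parser, no custom protocol.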


Museum Vision opens up lots of possibilities for institutions to build on top of our API.

1. Accessibility
For many museum visitors, English is not their first language, and trying to learn more about the art can be frustrating because most signage is in English only.

Our API makes it easy to access information about the work in a user’s native language in an entirely nonverbal way.
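One way the client could serve localized text is to key descriptions by language code and fall back to a default. This is a sketch under assumptions: the language codes, strings, and fallback behavior are illustrative, not part of the API.

```python
# Hypothetical localized descriptions keyed by language code.
descriptions = {
    "en": "Painted during Monet's years at Giverny.",
    "es": "Pintado durante los años de Monet en Giverny.",
    "ja": "モネがジヴェルニーで過ごした年月に描かれた。",
}

def describe(lang: str, fallback: str = "en") -> str:
    """Return the description in the user's language, else fall back."""
    return descriptions.get(lang, descriptions[fallback])
```

Because identification happens from a photo, the interaction itself stays nonverbal; only the returned text needs translating.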

2. Self-guided tours
Photo-based art identification makes it easier to use existing institutional multimedia. Rather than following a predetermined tour (or having to type in the name of a work), users can snap a photo and instantly pull up relevant text and audio, with a recommendation engine suggesting what to see next.
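The "what to see next" step could be as simple as ranking unseen works by how many tags they share with the work just viewed. The tags and scoring below are assumptions for illustration, not the toolkit's actual recommender:

```python
# Toy metadata: a set of descriptive tags per artwork (illustrative).
tags = {
    "water-lilies": {"impressionism", "landscape", "monet"},
    "starry-night": {"post-impressionism", "landscape", "van-gogh"},
    "the-scream": {"expressionism", "figure", "munch"},
}

def recommend(just_seen: str, already_seen: set) -> str:
    """Suggest the unseen work sharing the most tags with the last one."""
    seen_tags = tags[just_seen]
    candidates = [w for w in tags
                  if w != just_seen and w not in already_seen]
    return max(candidates, key=lambda w: len(tags[w] & seen_tags))
```

A museum could swap in richer signals (gallery layout, visit history) without changing the photo-identification step that feeds it.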

3. Smart bookmarking
Photos that users take for reference often get buried in their camera roll and are never looked at again. These photos become disconnected from the artwork information (artist name, year, description, etc.) and are usually poor in quality.

Our API could plug into a museum-created bookmarking system, making it easy to save photos for looking at later and giving users a reason to keep using the app after they leave the institution.
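A bookmark could store the identified artwork's ID alongside the user's photo, so the metadata stays attached instead of getting lost in the camera roll. The record shape below is a hypothetical sketch, not the toolkit's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Bookmark:
    """Links a saved photo to the identified artwork's metadata."""
    artwork_id: str  # key returned by the identification API (assumed)
    photo_path: str  # where the user's own photo is stored
    saved_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

saved = []  # the user's bookmark list

def bookmark(artwork_id: str, photo_path: str) -> Bookmark:
    """Save a photo under the artwork it was identified as."""
    b = Bookmark(artwork_id, photo_path)
    saved.append(b)
    return b
```

Because each bookmark carries the artwork ID, the app can re-fetch fresh metadata (or a high-quality image) later, which is what gives users a reason to reopen the app after leaving the institution.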


Museum Vision is currently under development. If you would like to sign up for a beta, please get in touch with us here: