Born of a larger visitor experience initiative undertaken by the Brooklyn Museum in 2013, ASK is part of the Bloomberg Connects digital engagement initiative funded by Bloomberg Philanthropies. The ASK app is the result of six months’ worth of pilot projects, working directly with visitors to determine their interests and needs. We ran the project using agile planning methodology, which relies on rapid-fire pilot projects to test, evaluate and iterate ideas. This allowed us to be nimble and to make changes based on what we were learning from our visitors. What the pilot tests showed us was that our visitors really wanted to talk to us about art – but we couldn’t staff our 560,000ft² building with knowledgeable art historians on hand any time someone had a question. Technology, however, provides a scaling solution, and we are able to handle app traffic with a team of only six people.
When we developed ASK, our initial challenge was how to engage with people around art via text messaging. What would engagement even look like? If you’re a visitor then sure, getting your question answered is great, but we wanted to take it further. After quite a bit of testing and evaluation, our audience engagement team (the folks on the other side of the app) identified several engagement goals: prompting the visitor to look closer, to find a deeper understanding of the art object, to make a personal connection with it, or to draw connections to other works in the collection.2 It is quite possible to hit many, if not all, of these goals during a particularly in-depth conversation, though the team aims for at least one during each exchange.
Many of those engagement goals are difficult to quantify, however, and measuring success is something my project partner Shelley Bernstein and I spent a good amount of time discussing. Ultimately we determined that three components help us determine the health of the ASK app: engagement goals, use rates, and the institutional knowledge we gain from the incoming data. We know from metrics that we hit the engagement goals pretty regularly and have from the outset. For example, we measure depth of conversation by the number of exchanges between the user and the team. At the start of the project we averaged about twelve exchanges; we now average fourteen. Our app store reviews have been pretty stellar all along, averaging 4‒5 stars in both the iTunes and Google Play stores. Users respond to the personal nature of the conversation and the added value of the information the team provides.
Use rate has been a slightly different story. For the majority of this project, we fixated on use rate. After all, it’s easy to track and is a very clear measurement: just how many people use this thing? We calculate it by dividing the number of chats by the number of visitors. From the soft launch, we saw a use rate of about 1%, and we immediately focused on increasing it. We felt we were offering such a great experience that all we had to do was figure out the right way to explain it to people and they would naturally want to use it. So we spent almost two years working on increasing that use rate. After a great deal of testing and improved marketing efforts, informed by insights from an outside evaluator, we managed to double our use rate from 1% to just over 2% consistently (when the stars align and our team is really on fire, we’ve even seen 4%). But this takes a lot of effort on our part: incentives and contests, staff hired specifically to promote the app, and marketing materials including palm cards and object labels. Our use rate sits squarely in the middle of the average range for industry apps,3 so that’s pretty good. However, we’ve decided it’s time to turn our attention to the final measure of success: institutional knowledge.
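The use-rate calculation described above is simple arithmetic; a minimal sketch, using purely illustrative figures rather than actual museum data:

```python
def use_rate(chats: int, visitors: int) -> float:
    """Use rate as a percentage: number of chats divided by number of visitors."""
    return chats / visitors * 100

# Hypothetical month: 600 chats against 30,000 visitors
print(f"{use_rate(600, 30_000):.1f}%")  # prints 2.0%
```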
As I mentioned, we have had over 14,000 chats with ASK users since 2015. That’s a lot of conversations about art, and we’ve only barely scratched the surface of what we can learn. In early 2016, we provided curators with transcripts of questions and answers from their collection areas in preparation for a number of gallery reinstallations. Sharing these conversations with curators happens regularly as part of our ongoing review processes, but for this time period we were able to use them to focus specifically on how visitors were responding to the displays and works on view. Our curators were then able to make changes based on this information. For example, we noticed that visitors were extremely distracted by a painted ceiling in the Egyptian galleries, so much so that they were not paying attention to the important collection works installed just below. So, as part of the reinstallation, we painted over the ceiling to refocus people’s attention on the works of art. Still in the Egyptian collection, people often asked “what’s the story behind the broken noses on the statues?” While we did have an interpretive panel explaining this, people clearly weren’t always seeing it, and they wanted to go deeper into the story. As part of the reinstallation, we moved the panel explaining the phenomenon to the central gallery, and Egyptian art curator Ed Bleiberg did additional research on the topic to provide the ASK team with a more nuanced answer. It proved so interesting that Ed is currently planning a small exhibition in the Egyptian galleries focusing on the broken noses.
Our initial analysis of the data also suggests that visitors tend to ask us about the largest works on display. This seems to be a simple matter of what most grabs people’s attention, but the phenomenon may be worth exploring further. What is so exciting is that the possibilities for what we can learn are extraordinary. We keep a running list of things to investigate, including what kinds of questions people ask and what that can tell us about how they react to, process and understand art. We are interested in how ASK fits into the gallery experience and the overall range of interpretation options, and whether behaviour differs between first-time and repeat users. The data can also help us make decisions about the amount and type of information we offer about artworks and objects in the museum.
This is all more than we will be able to tackle, at least initially. Currently we are in discussion with our colleagues in the curatorial and education departments to learn what questions they have about visitor interests and behaviour in the galleries; we will compare lists to determine the scope of the further research we undertake. We do know that we would like to complete a sentiment analysis of the conversations to determine how visitors feel about their experience overall, as well as about the artworks they chat about, and whether that sentiment changes over the course of a conversation. Wouldn’t it be interesting if a conversation began negatively or neutrally and ended positively – what role might the ASK team have played in that?
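To make the idea of tracking sentiment across a conversation concrete, here is a deliberately crude lexicon-based sketch. This is not the museum’s actual method (which has not been specified, and would in practice use a proper NLP toolkit); the word lists and chat messages are invented for illustration:

```python
# Tiny illustrative sentiment lexicons (hypothetical, not a real resource)
POSITIVE = {"great", "love", "fascinating", "interesting", "thanks", "thank"}
NEGATIVE = {"confusing", "frustrating", "boring", "lost", "broken"}

def score(message: str) -> int:
    """Crude per-message score: count of positive words minus negative words."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def trajectory(messages):
    """Score each message in order, so we can see how sentiment shifts."""
    return [score(m) for m in messages]

# Hypothetical visitor messages from one chat
chat = [
    "I'm lost, this gallery layout is confusing.",
    "Oh, the noses were broken deliberately? Interesting.",
    "That was fascinating, thanks so much!",
]
print(trajectory(chat))  # prints [-2, 0, 2] – sentiment rising over the chat
```

Note the middle message scores 0 only because the naive lexicon counts “broken” (here the topic, not a feeling) as negative; a real analysis would need context-aware tooling to avoid exactly this kind of false hit.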
We believe this is data that will be of interest to researchers of all kinds: educators, art historians, and technologists. For example, we have already been approached by chatbot developers who are interested in mining our conversations to teach their bots how to sound more human. So once we have had our crack at the data, we hope to publish it somewhere for others to use as they see fit.
While the ASK app and the team behind it have offered fun and engaging ways for visitors to learn about art, the data and its possibilities are the real success and legacy of this project. ASK users are giving us vital insights that will allow us to improve the visitor experience for everyone. And that’s pretty powerful stuff.