WordSeer 2: Test users wanted
A new version of WordSeer is in the works.
It’s been guided by the advice of our long-suffering literature-scholar collaborators, and by the tales of frustration and trial-and-error from the students in the Hamlet class who tried to use WordSeer to analyze parts of the play. We also thought hard about the text analysis process as a series of steps. “What might Tanya Clement have been thinking and doing at each stage of her computational analysis of repetition in Gertrude Stein’s The Making of Americans?” “What about when we analyzed language use differences in the descriptions of men and women in Shakespeare?” Out of this has come a better (we hope) understanding of the needs of scholars of text in the humanities.
We’ve completely rebuilt WordSeer. Instead of a traditional web application with a different visualization on each page, WordSeer now works more like an environment. Almost like a desktop — with windows and menu bars and persistent, useful, objects.
However, as researchers in Human-Computer Interaction, we know that we need to do user studies. First, we need to check whether we’re on the right track. Do our improvements make for a better experience than the old version? More importantly, we need more observations. To understand the humanities text analysis process, we want to observe more humanities text analysis.
Until now, the closest we’ve come to “user studies” is an iterative bouncing-around of ideas with just three scholars. They have been more like guides and expert consultants than “users”: they helped us sketch the first outlines and refine our initial ideas into something that was actually useful.
We’ve acted on the knowledge they helped us accumulate; the result is the completely redesigned WordSeer. Now we’re looking for a bigger set of users for a formal study. We’re hoping to find around 15 professional literature scholars who will allow us to observe them as they use WordSeer to explore a problem of genuine professional interest to them.
So what text collection could possibly interest 15 different scholars in the digital humanities community enough to want to do a computationally-assisted analysis of it? And allow us to observe them at it?
In a rare moment of epiphany, we realized we could just ask you. So here’s a poll. It’s populated with some examples, but we encourage you to respond in the “other” field. Tell us: what collection, if set up with text analysis and visualization tools, would you be interested in exploring?