When the once-popular “reality” TV show American Idol first aired, it seemed the embodiment of the American Dream for the 21st century. From the vantage point of the television set, one could watch singers belt their way to the top through hard work, good looks, and bubbly personalities. However, American Idol was perhaps most famous for its first weeks of “open calls,” the portion of the show’s season when any American could audition for the competition. These weeks were the most memorable because most of the people attending the open calls could not sing worth a nickel, sending the populace of this great nation into peals of condescending laughter.
Crowdsourcing digital history is a lot like those first few episodes in American Idol’s seasons: people who have a genuine interest in history are overjoyed to have an “open call” where they can show their amateur chops at decoding documents for a historical database. However, their attempts are often chock-full of errors because they have not been trained to interpret the subtleties of old documents. These well-meaning contributors inadvertently allow their biases and sloppiness as unpaid volunteers to taint these history projects. Nevertheless, just as open calls gave American Idol its broad popular appeal, crowdsourced history projects harness the power of enthusiastic volunteers at very little financial cost.
One crowdsourcing project is Religion in American History, a site run by Loyola University students. Their dilemma, like that of many in the crowdsourcing field, was that they had too many uncatalogued documents in the school basement and not enough funds. Using their Flickr account, the students and their professors set out to pass some of their digitized library to helpers on the internet. Upon examination, the problem with Loyola University’s crowdsourcing project seems to be that not enough people are interested in helping out, and those who are write only the most cursory things. One comment on a picture containing a full page of text merely noted the heading of the text and nothing else.
Another site is a Flickr page curated by the University of Pennsylvania, called the Penn Provenance Project. Its work deals mostly with documents that are not American in origin but somehow made their way to the U.S. during the 17th-19th centuries (there is even an entire gallery devoted to an abecedarium). The page is filled with comments from informed researchers, and everything is well-organized (unlike Loyola’s Flickr) and well-photographed. This site seems to have flourished on crowdsourcing because it is obviously popular with scholars who have time on their hands. Contributing to these Flickr compilations is easy: all you need is a Yahoo account and you are set to comment to your heart’s content.
One failure of the digital crowdsourcing movement is heritagecrowd.org. Once a flourishing site, it was created “to encourage the crowdsourcing of local cultural heritage knowledge for a community that does not have particularly good internet access or penetration.” It was even set up so that people could contribute by text message and voicemail. However, in 2012, the site was maliciously hacked and destroyed. In a blog post about Heritage Crowd’s downfall, its founder Shawn Graham detailed why the site was vulnerable, citing poor record keeping, too much free rein given to automated systems, and security loopholes.
Lastly, the Martha Berry Digital Archive (MBDA) was probably the most professionally operated crowdsourcing site I came upon. Boasting a clean, organized layout for contribution, the MBDA site made it easy for contributors to tag, describe, and categorize material. In contrast to the Flickr sites above that I tried to contribute to, MBDA gave me instructions and a framework in which to operate. I would say it was superior to most digitized-history initiatives out there. However, I would still say that MBDA is just as susceptible to bad categorization as the Flickr sites are.
Crowdsourced digital history appeals to the masses because of its inclusive nature and accessibility, but its chances of changing, adding to, or defining the scholarly historical field are slim. This is because the people contributing to crowdsourced history projects are not professional historians. Although there are of course exceptions, the unrestricted, no-filter nature of digital crowdsourcing allows all types of faceless internet surfers to place their stamp on the project at hand. Even though crowdsourced digital history might seem like an easy and cheap way to synthesize information, this “open call” leaves historical documents vulnerable to butchering.