Crowdsourcing is officially a “thing.” It exists regardless of personal opinion or scholarly objection. The consequences of crowdsourcing history threaten traditional historiography, which may or may not be a good thing. History has long been told through the highlight reel of time: historians have typically studied the high points and low troughs of life and left most of the details unmentioned. This approach certainly has its limitations. The average life is often overlooked and the exceptions ignored, but overall the general flavor of things is communicated effectively. We have now come upon an age, though, where the smallest thing can be recorded digitally and then easily found by anyone with an internet connection. Crowdsourcing digital archives allows for the rapid cataloging and dissemination of bits and pieces of the recorded world. The ease of both recording and cataloging such minutiae makes small-scale local history far more accessible and widespread. This pervasive small-scale history seems likely to be more accurate for any specific, given area, but less applicable or important to the world at large. In a sense, the globalization of archival knowledge through crowdsourcing might have the interesting effect of circumscribing the purview of any given historical work, simply because of the sheer amount of information available on each small segment of time and place.
As with all things, crowdsourcing exhibits both positive and negative characteristics. On the negative side, first, contributors who catalog history in crowdsourcing systems are often less qualified than professional historians or archivists. In many cases contributors do not even need to register for an account, so there is no accountability for whatever they choose to do on the archive. Secondly, crowdsourcing allows a wide range of minds to work on a single project without any communication between members, which does not bode well for a unified or systematically organized body of knowledge. Thirdly, crowdsourcing is unreliable at best in terms of interest, and thus volunteer labor: progress on any given day is never guaranteed, since crowdsourcing depends on public interest and involvement. On the positive side, crowdsourcing is a cheap way of garnering a theoretically limitless labor force. It also engages the public in historical work and gets people involved with important stories and events. Finally, crowdsourcing allows researchers to search through vast reaches of data that might never have been examined otherwise.
The sites that I looked at were:
1. DIY History. This site allows users to transcribe handwritten letters from a variety of library holdings. The project began with only papers from the Civil War but has been expanded to include other collections as well. http://diyhistory.lib.uiowa.edu/index.php
2. Citizen Archivist. This site allows users to transcribe, tag, and contribute documents for the National Archives. Users can search for a topic they are interested in and deal with documents relating to that area. This site contains a huge array of sources dealing with anything related to U.S. national archival interests. http://www.archives.gov/citizen-archivist/
3. Brooklyn Museum. This site aims to make the Brooklyn Museum’s collection more searchable online. To that end it allows users to tag, or challenge existing tags on, photographs of artwork. Users can earn points that unlock reward videos and are ranked according to the number of tags they have contributed or challenged. http://www.brooklynmuseum.org/opencollection/tag_game/start.php
4. MBDA. This site seeks to digitize and catalog the correspondence of Martha Berry, founder of the Berry Schools. Users can edit and catalog the documents by entering information about each document, tagging it, and summarizing its contents. Users can also earn ranks and badges the more they edit. https://mbda.berry.edu/
The two websites I looked at most extensively were MBDA and the Brooklyn Museum. The MBDA site was fairly user-friendly, and users could choose how much responsibility to take on depending on how much work they wanted to do with each letter. The Brooklyn Museum site afforded users only two options: tagging or challenging tags. MBDA’s reward system seemed more complex but less immediately gratifying than the Brooklyn Museum’s more up-front encouragement to beat other people’s rankings. On the Brooklyn Museum project the user seemed to accomplish relatively little, as most of the tags were so general as to be nearly useless for research purposes, while MBDA’s cataloging might allow more detailed searches. MBDA was ultimately less finished than the Brooklyn Museum project, as it has fewer people involved and requires more commitment from its volunteers.