JSON-LD for Cemetery Data

April 12, 2018 - All

BACKGROUND: This project’s naissance was on the bottom shelves of a local history & genealogy department of a middling public library in Maine. Resting on those shelves are cemetery maps, names, and dates: data one could employ to find the resting place of an ancestor, or fodder for any number of studies and stories, all cloistered in binders, on paper. The idea continued to take shape as I looked for electronic sources to populate the data model I was building. While recent years have seen genealogical records become increasingly available and connected, the same cannot be said for the myriad, disparate sites that house cemetery records. My wish was to find a way to make all of this data available, shareable, and reusable. I wanted the model to be able to ingest digital information, as I have done with data harvested from BillionGraves.com, but also to allow manual entry for analog records. Ultimately the JSON-LD model best met both needs: it is human- and machine-readable, and the @context keyword maps plain JSON keys onto shared vocabularies, turning static data into semantic information that can be queried and reused.

PROCESS: To say this developed in fits and starts is an understatement. While the overarching plan was clear to me, it was difficult to find the proper starting place. Cemetery data is housed in many silos, all with different structures and strategies. I began by querying the BillionGraves API, and while I was able to pull data from some other sites, the data from BillionGraves was the best structured (JSON) and cleanest (relatively speaking). A sketch of that harvesting step, and of the JSON-LD shape the records are mapped into, follows below.
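As a rough illustration of the harvesting step, the sketch below pulls a page of records over HTTP and saves the raw JSON for later mapping. The endpoint URL, parameter names, and response shape here are placeholders of my own, not the documented BillionGraves API, which has its own paths, parameters, and key requirements.

```python
import json
import requests

# Placeholder values: the real BillionGraves endpoint, parameter names, and
# authentication requirements should be taken from their API documentation.
API_URL = "https://api.example.com/cemetery-records"   # stand-in, not the real endpoint
PARAMS = {"cemetery_id": "12345", "limit": 100}        # illustrative parameters only

def fetch_records(url, params):
    """Fetch one page of burial records and return the parsed JSON payload."""
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = fetch_records(API_URL, PARAMS)
    # Save the raw harvest so it can be mapped into the JSON-LD model later.
    with open("billiongraves_raw.json", "w", encoding="utf-8") as fh:
        json.dump(records, fh, indent=2)
```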
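Once harvested, a record can be reshaped into the JSON-LD model described above. The sketch below shows one possible shape for a single burial record, not the project's actual schema: the @context mixes schema.org terms I know exist (Person, birthDate, deathDate, Cemetery) with an assumed cem: prefix for cemetery-specific fields, and the name, dates, and plot are invented for illustration.

```python
import json

# One burial record expressed as JSON-LD. The "cem" vocabulary and the
# example person are assumptions for illustration only.
record = {
    "@context": {
        "@vocab": "https://schema.org/",
        "cem": "https://example.org/cemetery-terms#",
    },
    "@type": "Person",
    "name": "Jane Doe",
    "birthDate": "1841-03-02",
    "deathDate": "1899-11-17",
    "cem:interredAt": {
        "@type": "Cemetery",
        "name": "Evergreen Cemetery",
        "address": "Portland, Maine",
    },
    "cem:plot": "Section B, Lot 14",
}

print(json.dumps(record, indent=2))
```

Because the @context resolves each key to an IRI, the same document reads as ordinary JSON for a person keying in analog records, yet can also be loaded into a triple store and queried alongside other linked data.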

tags: culture / programming / python /