A Comparison of Black/White Land Ownership in 1850-1870 in Chester County, PA

I wanted to better understand landownership in Pennsylvania. I am conducting research on a small, rural Black community currently known as Six Penny Creek (within French Creek State Park in Union Township, Berks County, PA). The piece of land upon which this community sits was purchased in 1842 by a Black husband and wife, Jehu and Dinah Nixon. It was pretty clear to me that simply purchasing a piece of land was a substantial success for the pair, but I needed data to demonstrate this. How likely was it for any Black family to purchase land? How did that compare to White families?

Luckily, there's data available for this through IPUMS. I won't go into all the details about how I got it; suffice it to say that it is an amazing resource, even if it is only accessible to researchers like me. Unfortunately, real estate values were only collected in 1850, 1860 and 1870. At this point, I am most interested in the settlement around the Civil War, so these dates work, even though I would have liked to see data from both a little earlier and a little later. That said, these dates still provide important information.

The IPUMS data is coded and therefore amenable to analysis. I downloaded data for Chester County, Pennsylvania. Although Six Penny Creek is in Berks County, the people in the settlement have significant connections to people in Chester County where there are similar small, rural Black communities just across the Maryland border/ Mason-Dixon Line. I included variables such as race (in this case it is coded simply as Black or White), real estate value and relationship to the head of household. The latter is important because almost all land is listed with the head of household (even if it might be owned by a married couple or even a group of individuals). I say “almost all” because sometimes (approximately 10% of the time, regardless of race) someone else in the household is listed as owning real estate.

So, some simple statistics. During this time period, approximately 47-49% of White heads of household did NOT own land, while roughly 73-78% of Black heads of household did not. That's quite a large difference: more than 50% of White heads of household owned land, while only about 25% of Black heads of household did. That alone is a massive difference, but I felt that this did not necessarily paint the full picture. So, I constructed a histogram of the value of the real estate by race and year.

A few notes about the graph below. First, it's straight from the Pivot Table in Excel and, therefore, some of the labels are outside my control. For example, the data is divided up by year and race; you can see these labels on the right. Note that "1" is the IPUMS code for White and "2" for Black or African American. There are ways that I could have modified this, but they are not straightforward, and I also think it is important to preserve and consider these codes. What do they say about both historic and present racism? The horizontal axis is "bins" of real estate value in increments of $500. The vertical axis is the percentage of heads of household of that race who owned real estate valued within each "bin".

Excel Pivot Table of IPUMS data- Real estate value by race and year.
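For anyone who would rather script this tabulation than build it in an Excel Pivot Table, here is a minimal pandas sketch. The file name is hypothetical, and I am assuming a standard IPUMS USA extract with YEAR, RACE, RELATE and REALPROP columns (check your own extract's codebook; top codes and missing-value codes are not handled here).

```python
import pandas as pd

# Hypothetical IPUMS USA extract (CSV) for Chester County, PA, 1850-1870.
# Assumed columns: YEAR, RACE (1 = White, 2 = Black), RELATE (1 = head of
# household), REALPROP (reported real estate value in dollars).
df = pd.read_csv("chester_county_ipums.csv")
heads = df[df["RELATE"] == 1]

# Share of heads of household reporting no real estate, by year and race.
no_land = (
    heads.assign(no_land=heads["REALPROP"].fillna(0) == 0)
         .groupby(["YEAR", "RACE"])["no_land"].mean()
)
print(no_land)

# For land-owning heads only, bin real estate value in $500 increments and
# show the percentage of each year/race group that falls in each bin.
owners = heads[heads["REALPROP"].fillna(0) > 0]
bin_edges = range(0, int(owners["REALPROP"].max()) + 500, 500)
owners = owners.assign(value_bin=pd.cut(owners["REALPROP"], bins=bin_edges))
pct_per_bin = (
    owners.groupby(["YEAR", "RACE"])["value_bin"]
          .value_counts(normalize=True)
          .mul(100)
          .rename("percent")
)
print(pct_per_bin.head(20))
```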

The graph nicely demonstrates how real estate values differ by race. Black heads of household were much more likely to own $1000 or less of real estate (the red/orange and yellow lines). Approximately 80% (1850), 77% (1860), and 60% (1870) of all Black heads of household who owned land were in the lowest two "bins" of value ($1-500 and $501-1000). This is starkly different for White heads of household (blue, green and light blue), for whom this number hovers around 20% (25% in 1850, 21% in 1860 and 15% in 1870). That is, if Black families were able to acquire land, it was largely low value (likely in terms of both land and homes). While a few Black heads of household were able to own land with a value greater than $1000, these numbers are limited: between $1001-$2500 this is around 8-21% (8% in 1850, 16% in 1860 and 21% in 1870). This means that the majority of Black heads of household who were able to own land were restricted to the lowest valued real estate, but this is not true for White heads of household. In 1850, 97.87% of all Black land-owning heads of household owned less than $2500 worth of real estate, while only 46.62% of all White land-owning heads of household owned less than $2500 worth of real estate. This means that the wealthiest group (which also tends to mean the most socially, economically and politically influential group) was almost exclusively White. At the very highest "bin," 10% of White land-owning heads of household owned real estate valued at more than $15,000, while only 1% of Black land owners fell into this range. To get back to numbers, rather than percentages, this means that in 1870 there were 810 White land-owners in this "bin" but only 3 Black land-owners.

As a side note, there is clearly some inflation at work here: the increasing number of land owners (of either race) in the higher bins may just reflect inflation, but it may also reflect social mobility, something that is hard to tell from this data.

To sum this all up- it was very difficult for Black individuals and families to acquire land. When they were successful, it often meant lower value land. However, even given this situation, some (3) were able to move into that highest “bin.” Although I would be very interested in these individuals, I would also suggest that being Black and owning any land, no matter how valuable, was quite a feat. On the flip side, this certainly does not mean that those Black folks who were unable to own land did anything wrong. Given the dire racism of the time (recall that Black men had the right to vote in PA up until the state constitution was rewritten in 1837/8), not owning land might simply have to do with not being in the right place at the right time.

We know that Jehu Nixon was a forgeman for the Potts family, well-known iron moguls and abolitionists. It is likely that he was able to purchase the land both because of his profession and because of his connection to a VERY wealthy family. Not all Black folks could claim such connections- indeed many were coming into the area directly from enslavement in the south. Jehu Nixon was born a free man in Pennsylvania (actually, he might have been born into slavery in PA, but I haven’t been able to prove that yet).

What is remarkable is that Jehu and Dinah, over the next few years, sold portions of their land to other Black families, creating a small, rural Black community (at its peak it was around 10 houses and 45 people) well known for its connection to the Underground Railroad.

Why Use GeoPackage?

Recently, I rediscovered GeoPackage. It's a long story, but I tried to use it a while ago and it didn't work so well. But, when I tried to use QField the other day to collect some data as I have done for years, it had- as you may not be surprised to hear- updated automatically. So, some things had changed, mostly for the better and easily adapted to. One of these was that I wasn't able to load my DEM. It's a bit of a monster (c. 20 miles long and a couple miles wide with resolution at around one meter- about 1 GB) and I have always used TIFF format (or more precisely GeoTIFF). But QField was now forcing me to convert to GeoPackage. So I shrugged and did it, easily in QGIS, which is where the maps that I use in QField are built.

But, this may have changed my life!

Alright, maybe that's being a bit dramatic. Here's the thing. I want to share data and publish it openly. My first experience with this was a bit of a bear (see my data in the Journal of Open Archaeology Data). What's the problem, you ask? First, the standard file format for vectors (point, line, polygon) is the shapefile (Do I hear some "boo"s?). Anyone who has used shapefiles knows that, in order to share shapefiles, you need to share a minimum of three files (and usually more). If you have ten layers you would like to share with collaborators, this means you need to share around 50 files. That's just ridiculous! It is an unnecessary hassle that makes it very difficult to collaborate and to version. One solution to that particular problem is a GeoJSON file. This solves a number of problems associated with shapefiles (see these links- 1, 2, and 3- for more information on other issues with shapefiles). But this still means that I need to share a separate file for each layer, so for five layers, that's five files. Not too bad, right? By the way, one of the major benefits of a GeoJSON file is that all of the information is internal to the file. That means, for example, I can publish the file online and stream the data (in my case, using QGIS- if you want to try it, you can use my data on charcoal hearths in PA). So, the data can live in an online repository such as Zenodo or Open Context and I can visualize that same data in a GIS program (I recommend QGIS) along with any other layers that live locally. Because the data is stored in a repository, I can rely upon it being consistent, and so can my collaborators.
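For anyone curious what "streaming" a published GeoJSON into QGIS can look like when scripted, here is a minimal sketch for the QGIS Python console. The URL is a placeholder, not the actual repository link for my hearth data.

```python
# Run inside the QGIS Python console. The URL below is a placeholder for
# wherever the GeoJSON actually lives (e.g., a Zenodo or Open Context record).
from qgis.core import QgsVectorLayer, QgsProject

url = "/vsicurl/https://example.org/charcoal_hearths.geojson"  # hypothetical
layer = QgsVectorLayer(url, "charcoal hearths (streamed)", "ogr")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)
else:
    print("Layer failed to load; check the URL and your connection.")
```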

But that still means that each layer is a separate file, and you cannot use GeoJSON for rasters (that's not totally true, but it certainly was not designed for it). So, what would work better? How about a file that holds all of your rasters and your vectors AND styles them. That's what GeoPackage does. It's actually a "container" for a SQLite database, where each layer is a separate table. Raster tiles are stored as JPEGs and PNGs- JPEGs provide compact (but lossy) compression, while PNGs are used at the tile edges because they are lossless and support transparency.
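Because a GeoPackage really is just a SQLite file, you can peek inside one with nothing but Python's standard library. A quick sketch (the file name is a placeholder for whatever your own .gpkg is called):

```python
import sqlite3

# Open any GeoPackage as a plain SQLite database and list its layers.
con = sqlite3.connect("my_project.gpkg")  # hypothetical file name
cur = con.cursor()

# gpkg_contents is a required GeoPackage table that registers every layer
# (vector features, raster tiles, attributes) stored in the file.
for table_name, data_type in cur.execute(
    "SELECT table_name, data_type FROM gpkg_contents"
):
    print(f"{table_name}: {data_type}")

con.close()
```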

Imagine this. I complete an archaeological project that involves georeferenced historical data, original LiDAR data (e.g., as LAS files), derivatives from the LiDAR (such as DEM, hillshade, slope analysis, etc.), points collected in the field, various polygons (in my case, State Game Lands boundaries, Appalachian Trail boundary, etc.) and lines (historic and modern roads, etc.). I want to archive everything. The last time I did this, I archived each file separately. The only link they had was a description (see this) that discussed how each layer was derived and interconnected. But they still live as distinct, if tenuously connected, digital objects. However, GeoPackage allows me to bundle all of this together- remember, it is a database- into a single package (i.e. file). I can then archive that file and everything REMAINS connected. So much easier for me and for any present or future collaborators, and so much better for digital preservation. If I do another project, I can either archive a new GeoPackage file or, if it is additional research using the same data, version the old one (retaining all versions, of course).
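As a sketch of what that bundling can look like in code (rather than through the QGIS GUI), here is one way to write several layers into a single GeoPackage with GeoPandas and GDAL. The file and layer names are placeholders, and the raster step relies on the GDAL GeoPackage raster driver's APPEND_SUBDATASET and RASTER_TABLE creation options as I understand them, so check the current GDAL documentation before leaning on it.

```python
import geopandas as gpd
from osgeo import gdal

# Write several vector layers into one GeoPackage (file and layer names
# here are placeholders for a real project).
points = gpd.read_file("field_points.geojson")
trails = gpd.read_file("appalachian_trail_boundary.geojson")
points.to_file("project_bundle.gpkg", layer="field_points", driver="GPKG")
trails.to_file("project_bundle.gpkg", layer="at_boundary", driver="GPKG")

# Add a raster (e.g., a DEM) to the same GeoPackage as a tile table.
# APPEND_SUBDATASET keeps the existing vector layers; RASTER_TABLE names
# the new table. Both options are from the GDAL GPKG raster driver.
gdal.Translate(
    "project_bundle.gpkg",
    "dem.tif",
    format="GPKG",
    creationOptions=["APPEND_SUBDATASET=YES", "RASTER_TABLE=dem"],
)
```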

Lastly, as I mentioned above, it is very important for me to be able to archive data in an online repository AND be able to stream that data to my workstation (in QGIS). I could do this with GeoJSON, so I am a big fan. However, I have not yet been able to figure out how to do this with GeoPackage; I'm still investigating.

I would also like to be able to store the files online, stream them to my workstation AND visualize them on the web. There is one tool (see this) that promises to do this with GeoPackage. You can use this link to see a test of some of my data (http://ngageoint.github.io/geopackage-js/?gpkg=http://ironallentownpa.org/Testsmay27a_4326.gpkg). Sometimes it does not load (I don't know why), but even when it does load, it does not seem to support rasters, which is a big problem.

Anyone out there with any thoughts, suggestions or recommendations please comment below!

Juxtapose Test

This is a test of Juxtapose by the Knight Lab at Northwestern University. The two images below show the present (Jan 2019) compared to an aerial photo from 1938. The furnace, casting house and the "dwelling house" have all been demolished, along with other local buildings. The original buildings were identified using an application for insurance for the furnace complex (valued at $5500) from 1828.

A quick test of Harvard WorldMap

For a long time, I have been looking for a way to both collaborate on and publish geospatial data and maps. Harvard WorldMap may be the answer. It is certainly the best thing I have found so far. Although it is based upon GeoNode, and you (perhaps with help) could get your own instance up and running, the key to Harvard WorldMap is that it also aggregates maps from other sources.

With Harvard WorldMap, users can upload layers- including vector (points, lines, polygons) and georeferenced raster (e.g., aerial photos or historic maps) layers. Formats are currently limited to shapefiles and GeoTiffs. Once uploaded, the user must add metadata. This is a very good thing and a vital step in the production and sharing of any type of data, but it is often difficult or imperfect for geospatial data.

The user can manage who can view, edit and manage the layers. Until a layer is ready for sharing, the user can keep it private. If they want to collaborate with others, they can grant only those individuals view, edit, or edit-and-manage permissions.

Once added, layers can be downloaded in a number of useful formats (Zipped Shapefile, GML 2.0, GML 3.1.1, CSV, Excel, GeoJSON, GeoTIFF, JPEG, PDF, PNG, and KML). Layers can also be streamed to your desktop GIS program (you are using QGIS, right?) via Web Mapping Service (WMS). This means that you and your collaborators can all work from the same streamed data in your desktop programs, rather than each working from a separate file on your own computers.
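As an illustration of what consuming such a WMS stream looks like from the QGIS Python console, here is a minimal sketch. The service URL and layer name are placeholders, not actual Harvard WorldMap endpoints.

```python
# Run inside the QGIS Python console. The WMS url and layer name below are
# placeholders; substitute the values advertised by the service you use.
from qgis.core import QgsRasterLayer, QgsProject

uri = (
    "crs=EPSG:4326&format=image/png&styles="
    "&layers=my_uploaded_layer"
    "&url=https://example.org/geoserver/wms"
)
wms_layer = QgsRasterLayer(uri, "WorldMap layer (WMS)", "wms")
if wms_layer.isValid():
    QgsProject.instance().addMapLayer(wms_layer)
else:
    print("WMS layer failed to load; check the url, layer name and CRS.")
```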

Layers can be aggregated into maps, for which access can be restricted (or not) in the same way as for layers. You can add your own layers, but you can also use their search engine to find layers that are connected to Harvard WorldMap, such as maps from USGS or from ESRI. The selection is not yet amazing, but I was able to find a few maps for my work in Ecuador that I had not found elsewhere.

Vector layers can be styled by changing the marker shape, color, size and label.

This map can then be published. Here's a test of some data collected by my students and me regarding charcoal production on the Blue Mountain in Pennsylvania. Take a look. Note that you can change the layers (both my uploaded layers and the basemap).

Creating DEM from PASDA las files

The following describes how the maps discussed in the previous post were constructed. This information is provided in the spirit of open access and replicability. It is a step-by-step guide to building digital elevation models (and their derivatives) from PASDA LiDAR data.

  • Download las tiles from PASDA.
    • Go to PASDA Imagery Navigator: http://maps.psiee.psu.edu/ImageryNavigator/
    • Zoom in on the area of interest.
    • Under the Display Tile Index drop down menu, select “Lidar Hillshade”
      • This will show you the tile index and the relevant file names
    • Place your cursor over a spot of interest and right click.
      • This will bring up a list of available data.
      • Click on the “LiDAR, Topo and DEM” tab
      • At the right, you will see a listing of “LAS” files for download.
    • Select and download all the appropriate files.
  • Convert projection and reserve only category “2” points (2= ground return).
      • Note that Pennsylvania Data MUST be converted from NAD83 PA S (feet) to NAD83 PA S (meters)
      • Open las2las.exe
        • In the upper left, find and select all of the files from the above.
          • Note that you can use the wildcard (*.las, not *.laz as is the default)
        • Keep only ground points
          • Expand the “filter” menu
          • Select “keep_classification” under “by classification or return”
          • Under “number or value”, enter 2
        • Reproject from feet to meters.
          • Under “target projection” select
            • State plane 83
            • PA_S
            • Be sure “units” are in meters.

Your GUI should look something like this:

forWP

  • Choose an output location in the upper right.
    • Click “Run” (in the lower left; you may have to minimize (click the “-“))
    • In the command line, you should see something like:
    • las2las -lof file_list.7808.txt -target_sp83 PA_S -olaz
  • You should now have reprojected las files that include only the ground return.
  • Convert las files into smaller “tiles”
    • Open “lastile.exe”
    • Add the reprojected las files (actually now they should be laz files) in the upper left.
    • Choose a tile size of 1000 (for the above this means 1000 meters)
      • Choose a buffer of 25 (you need a buffer and just need to experiment with what works best for you.)

Your GUI should look like this:

2lastile

    • Hit “Run”
    • The command line should look something like this:
      • lastile -lof file_list.1576.txt -o "tile.laz" -tile_size 1000 -buffer 25 -odir "C:\Users\Benjamin\Desktop\Working_LiDAR\Repoj_tile_las" -olaz
  • Convert tiles into DEM
    • Open “BLAST2DEM.exe”
    • Add the tiles constructed in previous section
    • Choose your output location
    • Choose “tif” for file format

Your GUI should look like this:

3blast2dem

    • Click “RUN”
    • Your command line should look like this:
      • blast2dem -lof file_list.6620.txt -elevation -odir "C:\Users\Benjamin\Desktop\Working_LiDAR\DEM_tiles" -otif
    • Your DEMs are now created.

From here, you will want to stitch the DEMs back together, but you need a GIS program for that. You can use the open source QGIS.

  • Open QGIS
  • Click on Raster- Miscellaneous- Merge.
  • Select the “choose input directory instead of files” box
  • Select the destination location and file name.
  • Click “OK”-
    • I frequently get an error here, but the results appear complete.

At this point, all of your data should be in a single Geotiff file (be sure to save it) as a digital elevation model.
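If you would rather script the merge than click through the QGIS dialog, a rough equivalent with GDAL's Python bindings might look like this (the paths and file names are placeholders):

```python
import glob
from osgeo import gdal

# Gather the DEM tiles written by blast2dem (path is a placeholder).
tiles = glob.glob(r"C:\Users\YourName\Desktop\Working_LiDAR\DEM_tiles\*.tif")

# Build a virtual mosaic of all the tiles, then write it out as one GeoTIFF.
vrt = gdal.BuildVRT("dem_mosaic.vrt", tiles)
gdal.Translate("dem_merged.tif", vrt)
vrt = None  # close/flush the VRT
```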

In order to complete the analysis in the previous post, I converted the DEM into a slope model, which shows high slope in lighter gray and low slope in darker gray.

  • To do this, in QGIS, use Raster- Terrain analysis- Slope. The input is your DEM and the output is the new slope model (a scripted alternative is sketched after this list).
  • Within QGIS, you should now be able to see maps similar to those shown in the previous post.
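Here is the scripted equivalent of that slope step, again with GDAL's Python bindings and placeholder file names:

```python
from osgeo import gdal

# Derive a slope raster from the merged DEM (file names are placeholders).
# Rendered with a default grayscale ramp, steep areas appear lighter and
# flat areas darker.
gdal.DEMProcessing("slope.tif", "dem_merged.tif", "slope")
```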

Finding Charcoal- LasTools + PASDA LiDAR data= Amazing!

For a long time, I have been interested in charcoal production on the mountains around the Lehigh Valley, which I first learned about along the Lehigh Gap Nature Center's Charcoal Trail. I had hiked this trail many times before I discovered what the name meant. Along the trail are flat areas (around 30 feet [10 meters] in diameter) upon which colliers (charcoal makers) piled large mounds of wood that they charred to produce charcoal. One of the primary uses of that charcoal was iron production. Indeed, the area around the Lehigh Gap Nature Center (ok, a bit farther west) was owned by the Balliet family, who owned and operated two iron furnaces, one on each side of the Blue (Kittatinny) Mountain (one in Lehigh Furnace, Washington Township and another in East Penn Township; and likely a forge in East Penn).

I became interested, but was not truly fascinated until I found and perused PASDA’s Imagery Navigator. Within the Navigator, you can view DEMs (digital elevation models) created from a LiDAR survey from around 10 years ago. To put it too simply, to collect LiDAR a plane flies over an area shooting the ground with lasers. Since the location of the plane is known (through an amazing combination of GPS and IMU) and the speed of light is known, lasers bouncing back to the plane effectively measure the distance to a “return”, which is an object, such as the tree canopy, a trunk, a roof, or the ground. A DEM is then constructed from the LiDAR point cloud. I wondered if this data could show me flat areas on the sloped landscape (like those clearly visible along the LGNC’s Charcoal Trail). They could!

I used the "hillshade," which is a view of the landscape created by applying a light source to the DEM (digital elevation model). It's as if all of the vegetation was removed from the landscape and it was painted gray with a sun shining on it from the NW at about 45 degrees. This way, I was able to identify over 400 charcoal pits over an area of approximately 100 square kilometers.
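For anyone building their own hillshade from a DEM, that description maps directly onto GDAL's hillshade parameters (a light from the NW is an azimuth of 315 degrees, at 45 degrees above the horizon). A minimal sketch with placeholder file names:

```python
from osgeo import gdal

# Generate a hillshade from a DEM with the light source in the NW (azimuth
# 315 degrees), 45 degrees above the horizon. File names are placeholders.
gdal.DEMProcessing(
    "hillshade.tif", "dem_merged.tif", "hillshade", azimuth=315, altitude=45
)
```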

So… many years later, I am finally doing something with this. My students and I, as a part of a Field Archaeology class, are investigating charcoal pits and the people who used them. More on this  part of the project another time.

In the meantime, my GIS skills have dramatically increased and I was lucky enough to attend a workshop on LiDAR (funded by NSF and run by NEON and Open Topography; a special thanks to Kathy Harring, Muhlenberg's Interim Provost). I was interested in LiDAR for a new project that I am working on, but as a part of the workshop, we were to do a small "project" based upon new understandings and skills developed over the three days. I chose to download some of the original LiDAR data (props to Pennsylvania for providing all of this online) and build my own digital elevation model. The idea was that I could tweak it in order to see the landscape better. So, I started off just trying to remake the DEM provided via PASDA; that would at least show that I had developed some skills. However, just trying to do this produced spectacular results that have changed the way I conceptualize the landscape and our project.

Most importantly, the resolution of my reconstructed DEM is much, much greater than that of the DEM provided by PASDA. It is clear why this is true (see this description), but it is not as apparent as it should be when viewing (or downloading) the PASDA DEM. PASDA provides a DEM based upon points that are categorized as "8," which are "Model Key (thinned-out ground points used to generate digital elevation models and contours)", not those categorized as "2," which are "ground" points. So, I was working with all of the ground points, while the PASDA-provided DEMs were based solely upon a subsample.

Here’s what I found:

Here’s an image of one section of the area under study. This is original data from PASDA. Note the “eye-shaped” flat spots.

lidarchardemorig

In the following image, I have marked all of the charcoal pits I could find with a blue dot.

lidarchardemorigpoints

This image shows the hillshade made from the DEM I built. Honestly, it doesn’t look terribly different from far above. Perhaps a bit more granular.

lidarchardemnew

However, here’s a zoomed in comparison of the area just NE of the point in the lower left corner in the image above:

Before:

lidarchardemorigsm

After:

lidarchardemnewsm

However, once I do a slope analysis, which shows flatter areas in darker gray and steeper areas in lighter gray, the charcoal pits (which are flat areas on a sloped landscape) literally leap out of the image.

lidarchardemnewslope

The image below shows all of the newly identified charcoal pits with red triangles. A landscape that I once thought had minimal charcoal pits (I wondered why… and was developing possible hypotheses) now appears to have been quite densely packed with charcoal pits.

lidarchardemnewslopepts

Next post… details on exactly how I did the above. Hint- LasTools and QGIS.

Wanna collect data digitally?

(Note- originally posted here http://digitalarchaeology.msu.edu/wanna-collect-data-digitally/ on Sept 6, 2016)

This is my final post as a participant in the Institute for Digital Archaeology. This post serves three purposes. First, I announce a resource that I have created to enable digital data collection in archaeology. Second, I want to mention a few of my favorite aspects of the Institute. Finally, I just want to say a few thanks.

First, I announce a new resource for digital data collection in archaeology (see website). While I initially planned to make something (I didn't even really know what… an app?), instead I have cobbled together a couple of pre-built, "off-the-shelf" tools into a loose and compartmentalized system. And… because they are all well-supported open source tools, they are also 100% free! On the website, I provide a justification for why I chose these tools, criteria for selection and descriptions of the tools. More importantly, even though all of these have low adoption thresholds (that was one of the criteria!), I also provide documentation on the ins and outs of using these tools in order to support their testing, adoption and use in archaeology. This means that you can be up and running in a matter of minutes (OK, maybe more depending upon download speeds…). In her final post Anne talks about toe-dipping and cannon-balling. My goal here was to suggest tools and provide assistance so that you either can dip your toes or jump right in; either way, I think you will see a big splash. I hope this helps. PLEASE LEAVE FEEDBACK. Please.

Second, I wanted to share two of my favorite aspects of the Institute. One, my colleagues. I have been honored to be part of such an open, collaborative and supportive cohort of insightful and dedicated scholars. I learned as much simply from conversations over coffee at breakfast, Thai food at lunch and beers over dinner as I could hope to learn at any organized workshop or talk. Your struggles are as valuable to me as your final products. I want you all to know that I look forward to more conversations over beer, lunch (maybe Mexican this time?), and beer (did I write beer twice?). Two, time. I greatly appreciate the space that participating in this yearlong institute has given me. Without this institute, I think I would still be struggling away trying to put some sort of digital data collection system together in my "spare" time. No, it's not done (is there such a thing?), but the institute and the (dreaded) posts have kept me on track even through dead ends and unexpected turns.

Third, I want to thank the entire faculty. Of course, an especially large "THANK YOU" goes to Ethan and Lynne for putting the Institute together. I have learned so much from the rest of the faculty that I would like to thank them as well for their time and effort, both during the institute weeks at MSU and during the year in between. I understand the amorphous, complex, ugly (i.e. coding) world of digital archaeology much better than I ever thought I would. Thank you, Terry, Kathleen, Catherine, Brian, Shawn, Eric, Dan and Christine.

Lastly, a satisfied smile goes out to the NEH for supporting the Institute. Good decision! Amazing results! Just look.

Kobo Toolbox in the field- limitations? and solutions.

(Note: originally posted here: http://digitalarchaeology.msu.edu/kobo-toolbox-in-the-field-limitations-and-solutions/ on Aug 6, 2016)

This is a field report of efforts to develop a plan for low cost, digital data collection. Here’s what I have tried, what worked well, what did not and how those limitations were addressed.

First a description of the conditions. We live in two locations in Ecuador. The first is the field center established and currently run by Maria Masucci, Drew University. It has many of the conveniences needed for digital data collection, such as reliable electricity, surge protectors, etc. It does not have internet nor a strong cellular data signal. We are largely here only on weekends. During the week, we reside in rather cramped conditions in rented space in a much more remote location, where amenities (digital and otherwise) are minimal. There is limited cellular data signal (if you stand on the water tower, which is in the center of town and the highest point even though it is only one story tall, you can get a weak cellular data signal; enough for texts and receiving emails, but not enough for internet use or sending emails) and there is no other access to internet. We also take minimal electronic equipment into the field for the week (e.g. my laptop does not travel). So, everything needs to be set up prior to arrival in the field. The idea, therefore, is to largely use minimal electronic equipment in the field; I tried to use only one device (while also experimenting with others) for this reason. My device of choice (or honestly by default) is my iPhone 5s.

The central component of this attempt at digital data collection is Kobo Toolbox (see my earlier posts for more details… here, here, here and here), an open-source, web-browser based form creation, deployment and collection tool. Kobo Toolbox's primary benefit is that, because it is browser-based, it is platform independent. You can use an iPad or an iPhone just as well as an Android device or a Mac or PC computer. This means that data can be collected on devices that are already owned or that can be bought cheaply (e.g., a lower level Android device vs. an iPad). Forms are created through their online tools and can be fairly elaborate, with skip logic and validation criteria. Once the form is deployed and you have an internet connection, you load the form into a browser on your device. You need to save the link so that it can be used without a data connection. On my iPhone 5s, I simply saved the link to the home screen. A couple of quick caveats are important here. I was able to load the form onto an iPhone 4s (but only using Chrome, not Safari), but was unable to save it, so I lost it once the phone was offline. I was unable to load the form at all on an iPhone 4 (even in Chrome). Therefore, although ideally the form should work in any browser, the reality is that it makes use of a number of HTML5 features that are not necessarily present in older browsers. Of course, as time goes on, phones and browsers will incorporate more HTML5 components and, therefore, this will be less of an issue.

Once the form is deployed and saved on your device, you can collect data offline. When the device comes back online, it will synchronize the data you have collected with Kobo’s server (note that you can install Kobo Toolbox on a local server, but at your own risk). Then, you can download your data from their easy-to-use website.

For the first week, I set up a basic form that collected largely numerical, text and locational data. We were performing a basic survey and recording sites. Outside of our normal methods of recording sites and locations, I recorded sites with Kobo Toolbox in order to determine its efficacy under rather difficult "real-world" conditions. I collected data for 5 days and Kobo Toolbox worked like a dream. It easily stored the data offline and, once I had access to a data signal, all the queued data was quickly uploaded. I had to open the form for this to occur. I was unable to upload with a weak cellular data signal; it only completed uploading once I had access to WiFi (late on Friday night). However, it synchronized nicely and I was able to then download the data (as a CSV file) and quickly pull it into QGIS.

The single biggest problem that I discovered in the field was that I needed to be able to see the locations of the sites recorded with Kobo Toolbox on a dynamic map. Although Kobo Toolbox recorded the locations nicely, you cannot see the points on a map, so I had to use another method to visualize what I was recording. The only way to see the recorded data is by downloading it from Kobo Toolbox, but that requires a data connection. You can see and edit the data in the field only if you save it as a draft. Once the data is submitted, however, you cannot edit it in the field (this was true of other field collection systems that I have used, e.g. FileMaker Go). Yet, I still needed a way to visualize site locations (so I could determine distances, relationships to geographic features and other sites, etc. while in the field).

For this purpose I used iGIS, a free iOS app (see below for limitations; subscriptions allow additional options). Although this is an iOS app with no Android version, there are Android apps that function similarly. With this app, I was able to load my own data as shapefiles (created in QGIS) of topographic lines, previous sites and other vector data, as well as use a web-based background map (which seemed to work, even with a very minimal data connection). Raster data is possible, but it needs to be converted into tiles (the iGIS website suggests MapTiler, but this can also be done in QGIS). Although you can load data via multiple methods (e.g., wifi using Dropbox), I was able to quickly load the data into the app using iTunes. Once this data is in the app on the phone, an internet connection is no longer needed. As I collected data with Kobo Toolbox, I also collected a point with iGIS (with a label matching the label used in Kobo), so that I could see the relationship between sites and the environment. Importantly, I was also able to record polygons and lines, which you cannot do with Kobo Toolbox. Larger sites are better represented as polygons, rather than points (recognizing the c. 5-10m accuracy of the iPhone GPS). The collection of polygons is a bit trickier, but it works. Polygons and lines can later be exported as shapefiles and loaded into a GIS program. By using equivalent naming protocols between Kobo Toolbox and iGIS, one can ensure that the data from the two sources can be quickly and easily associated. The greatest benefit of iGIS is seeing the location of data points (and lines and polygons) in the field, and being able to load custom maps (vector and raster) into the app and view them without a data connection. Although this is possible with paper maps (by printing custom maps, etc.), the ability to zoom in and out increases the value of this app greatly. Getting vector data in and out of iGIS is quite easy and straightforward. iGIS is limited in a couple of ways, nearly all of which are resolved with a subscription, which I avoided. Here's a brief list of limitations:
– All points (even on different layers) appear exactly the same (same size, shape, color; fully editable with subscription). This can make it very difficult to distinguish a town from a site from a geographic location
- Like points, all lines and polygons appear the same (also remedied with a subscription). It was particularly difficult to tell the difference between the many uploaded topolines and the collected polygons.
– Limited editing capabilities (can edit location of points, but not nodes of lines; can edit selected data).
- Limited entry fields (remedied with a subscription, but perhaps this is not necessary if it can be connected to data collected with Kobo Toolbox).
– Unable to collect “tracks” as with a traditional GPS device (Edit- OK, so I was wrong about this! You can collect GPS tracks in iGIS, even though this is not as obvious as one might like).

The final limitation of iGIS was not something that was originally desired, but it became incredibly useful in collecting survey data, especially negative results (positive results were recorded with the above). Our survey employed a "stratified opportunistic" strategy. We largely relied upon local knowledge and previous archaeological identification to locate sites, but also wanted to sample the highest peaks, mid-level areas and valley bottoms. In order to do this, we used three different strategies. First, we utilized knowledgeable community members to take us to places they recognized as archaeological sites. Second, we followed selected paths (also chosen by local experts). Third, we chose a few points (especially in the higher peaks c. 200-300 meters above the valley floor). One of the most important aspects of this type of survey was recording our "tracks" so that we would know where we had traveled. This is commonly done with GPS units, but I was able to collect these using MotionX-GPS on the iPhone already in use. The GPS "tracks" (which are really just lines) as well as "waypoints" (i.e., points) were easily exported and loaded into QGIS. This allowed us to easily collect data about where we surveyed but did not find archaeological sites. (Edit- Note that you can use iGIS for this function! MotionX GPS is not needed, therefore. It is great for recording mountain biking and hiking, however!).

One final comment will suffice here. I just discovered a new app that may be able to replace iGIS. QField is specifically designed to work with the open source GIS program QGIS. Although it is still new and definitely still in development, it promises to be an excellent open source solution for offline digital data collection- though limited to Android devices!

Crafting work flow- Kobo Toolbox/ PostGIS/ QGIS/ LibreOffice Base/ pgadminIII

(Note: Originally posted here: http://digitalarchaeology.msu.edu/crafting-work-flow-kobo-toolbox-postgis-qgis-libreoffice-base-pgadminiii/ on May 21, 2016)

Having largely decided on what tools to use (see previous posts: here, here and here), ironing out how this process will actually work has been a bit more difficult than hoped.

Two quick reminders. First, the goal of this project is to design ("stitch together" may be a better term) tools for data collection and management (and eventually for archiving, etc.) that have relatively low adoption curves for most non-techie users. The primary audience is those on the "fringes" (though the fringes may be larger and perhaps more important than the "core") of archaeology- those who have limited resources, such as graduate students, contingent faculty, faculty in small under-resourced schools, independent scholars, small contract firms, etc. Second, largely because of the first, all tools should be open access and the aim should be open access data (yes, perhaps with some data- e.g., site locations- modified).

Kobo Toolbox and PostGIS form the essential core tools in this process. Kobo Toolbox is an easy to set up online/offline browser-based data collection tool. There really are no other OPEN ACCESS tools comparable to Kobo Toolbox (though there are numerous commercial tools). PostGIS is one (of many) spatial databases. I have chosen this largely because it is well-supported and widely used, but other database types would be useful as well.

Ok, down to the nitty-gritty. How to actually make this work!

First, we need to get data collected using Kobo Toolbox into the PostGIS database (because that is where the relational magic happens). This can be done through all three possible tools- QGIS, LibreOffice Base or pgAdminIII. Determining which tool/method was the quickest, easiest and most accurate took many, many hours of tinkering. I haven't talked much about pgAdminIII, which is the GUI created to work with PostgreSQL databases and, therefore, will work best with PostgreSQL/ PostGIS data (though that doesn't mean it is the best choice). QGIS and LibreOffice are designed to operate with a larger number of database types.

The key to understanding which tool is the most appropriate is remembering that your data is spatial. If you bring data into pgAdminIII or Libreoffice Base, they do not recognize what type of data is in each field (a.k.a. column). You have to specify the type for each column. In a large data table, this can be quite laborious, especially when using the PostGIS extension. However, QGIS is designed to work with spatial data. I found that importing recently-collected Kobo Toolbox data is best done through QGIS. Here’s how it’s done:

Once you have your PostGIS database up and running (I needed a friend to help do this, but once it is up and running, you are good to go), start a “New Project” in QGIS. Within QGIS, click on the “Add Delimited Text Layer” button (a). The following shows the resultant display completed:

b

Most importantly, note that QGIS identifies the X and Y fields as the fields automatically labeled by Kobo Toolbox as "_Location_longitude" and "_Location_latitude." If QGIS does not identify these columns as the geometry fields, you can do so with the drop down menu. Click "OK." In the next box, you will need to identify a CRS (Coordinate Reference System). Kobo data is in WGS 84 (EPSG 4326), which is the most common CRS (if you need to, you can transform your data to a new CRS later). Although it is not perfect, I encourage the use of WGS 84 because it can be deployed through the web more easily (e.g., via Google Maps, CartoDB, etc.).
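For anyone who prefers to script this step, the same CSV-to-points conversion can be sketched with GeoPandas. The file name is a placeholder, and the column names are the ones my Kobo export used; yours may differ.

```python
import pandas as pd
import geopandas as gpd

# Read the CSV downloaded from Kobo Toolbox (file name is a placeholder).
df = pd.read_csv("kobo_export.csv")

# Build point geometries from the longitude/latitude columns that Kobo adds
# automatically; the column names below come from my export and may differ.
gdf = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(
        df["_Location_longitude"], df["_Location_latitude"]
    ),
    crs="EPSG:4326",  # Kobo collects coordinates in WGS 84
)
print(gdf.head())
```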

Perhaps most importantly, QGIS also recognizes the format of many of the other columns. This is incredibly important because certain functions can be done with certain types of data (e.g. only numbers stored as numbers can be used in calculations; only data stored as text can be used in categorical labeling within a map; etc.). To see the format that QGIS identified for each column, right click the new layer and select Layers, then the Fields tab. You should see this:

c

Note many different “Types” of data- QString, int, double. If I bring this database into PostGIS via LibreOffice Base or pgAdminIII, I will need to specify the types. Of course, there are always problems with allowing software to automatically do nearly anything. In the above, “Students/student16” is identified as “QString,” but in reality this is a Boolean field (True or False). In this case, it was collected as a radio button in Kobo Toolbox and identifies whether or not this particular student was involved in the collection of each data point. This can be corrected later, but we do not want to do that yet.

The data is now in QGIS, but still lives in the CSV file. QGIS simply knows where to look to get data so that it can be mapped and the types of data so that they can be used appropriately (e.g., digits formatted as text cannot be used in a calculation).

We want this data in our PostGIS database so that it can be related to other data.

First, move the CSV data to PostGIS. This is relatively simple with DB Manager in QGIS. Before anything else, be sure to establish a connection with your database (see this link). Now that you have a connection, you can interact with your database. DB Manager is a tool for interacting with spatial databases. DB Manager is a plugin that is now part of the core download. If it is not apparent, you can always install it as a plug-in. Click on Database–> DB Manager –> DB Manager at the top of the screen.

d

You will see:

e

Expand “PostGIS” by clicking on the +. You should see your database (if not you will need to establish a connection).

Your view should now look something like this:

f

With the layer from your imported CSV highlighted in the Layers Panel in QGIS, click on “Import Layer/ File” (g). Use settings similar to these:

h

Please note that you must identify the primary key as “_uuid” because this is a unique id assigned by Kobo Toolbox. Every table in a relational database must have a primary key that it uses to uniquely identify each record (row). You should not identify a column for geometry because there isn’t actually a column in the CSV file for this. QGIS will create it based upon the TWO columns you told it to use as X,Y coordinates and store it in a “geom” column.

Once you click OK, you should see a message that your data was successfully imported.
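If you would rather push the data into PostGIS from a script instead of through DB Manager, a rough GeoPandas equivalent (continuing from the GeoDataFrame built from the Kobo CSV above) might look like this. The connection string is a placeholder, and this route requires the SQLAlchemy and GeoAlchemy2 packages.

```python
from sqlalchemy import create_engine

# Placeholder connection string; substitute your own user, password, host
# and database name.
engine = create_engine("postgresql://user:password@localhost:5432/fielddata")

# Write the GeoDataFrame (built from the Kobo CSV earlier) into PostGIS.
gdf.to_postgis("kobo_points", engine, if_exists="replace")

# to_postgis does not set a primary key, so add one on Kobo's unique id
# afterwards, e.g. in SQL: ALTER TABLE kobo_points ADD PRIMARY KEY ("_uuid");
```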

Although you have imported the data into your PostGIS database, it will not yet appear in your QGIS map. To do so, click on “Add PostGIS Layers” (i). In the subsequent screen, establish a connection (if not already established through the browser, you may need to click on “New”). Then select the newly imported file. Your screen should look similar to this:

j

Once you have selected the appropriate file, click on “Add”. This will add a new layer from your PostGIS database. It should look similar to this:

k

Note that, in the image above, the newly imported PostGIS data (in red) sits directly above the data in the CSV file (in green).

Finally, one should note that the main difference between the CSV file and the table in the PostGIS database is that the data in the database is defined by type.

However, there are two additional features of a relational database that make this conversion important. First is the ability to establish relationships between tables (which you cannot do with CSV files). Second is the ability to update your data with new information.
Although there is no space (or time) to address both of these issues at this point, they are important to keep in mind in the overall strategy.

In subsequent posts, I will address how to update your data with newly collected information and how to establish relationships. It should be noted that this can be done through the same three tools- LibreOffice Base, QGIS and pgAdminIII.

Why PostGIS?

(Note- Originally posted here: http://digitalarchaeology.msu.edu/why-postgis/ on April 8, 2016)

Benjamin Carter, Muhlenberg College

This will be a relatively quick post. As promised in this post, I will discuss PostGIS, a spatial extension for the PostgreSQL relational database management system.

First, what is a relational database and why should archaeologists use them (for a fuller explanation and discussion see Keller 2009 and Jones and Hurley 2011)? Of course, many archaeologists already use these (especially at larger contract archaeology firms), but too many of us avoid them. Indeed, even in graduate school, I never discussed data organization (presumably this is common). However, the way that you organize your data can reduce time spent data wrangling and promote richer analysis. It also promotes, or limits, certain types of analysis.

Let's contrast a relational database to the "flat" file (often in the form of an Excel spreadsheet) that is all too common in archaeology. Anyone who has used a spreadsheet knows that they are incredibly frustrating to use: Have you ever sorted by a column and then found that you didn't highlight all of the columns? Now, one column is disconnected from its rightful data. No problem, right? That's why there is an undo button. What if you accidentally saved it? No problem, you have that archived copy, right? Where was it?

Analyses of data in flat files are constrained by the contents of the spreadsheet. Even if you have multiple sheets in separate tabs (e.g., one for ceramics and one for lithics), they are not linked (yes, you can link through formulas, etc., but that is laborious as well). What if you need to input a new set of information? Let's say you have a context code that includes site, unit and level, but you want to analyze by unit. You would need to create a new column and either manually enter the unit or digitally separate the unit from your context code. All of this takes time and creates poorly organized files that are difficult to reuse (frequently because data is disconnected from its metadata). Similarly, these files frequently lack the appropriate metadata that would allow them to be shared and archived. They are largely designed for, and with the interests in mind of, a single researcher (or perhaps a small team). Frequently, specialists have disparate spreadsheets that cannot "talk" to each other.

While no database is perfect, relational databases can alleviate many of these issues. The essential concepts behind a database are to disaggregate data, limit busy work and standardize your data (note that this never means that you would lose the qualitative narrative). This reduces time and increases quality control. To conceptualize a relational database, think of multiple tables linked together. For example, I may have an excavation table with a wide array of data, including a column with site number. Each "record" (i.e., row) includes all the nitty-gritty information from a single layer from a single unit. If I use the trinomial system, there are three pieces of information buried in a single number/ column (state, county, site number). However, if I wanted to disaggregate these pieces of information in a spreadsheet, I would need to make new columns and do a great deal of copying and pasting, all the while risking separating a piece of data from its original record. In a relational database, the original table can be easily connected to a small table that includes one column each for trinomial number, state, county and site number, but only ONE record for each unique trinomial. Then you create a "relationship" between the trinomial column in the original table and the trinomial column in the new table. In other words, each record (row in a table) of your original data is directly linked to state, county and site number with no insertion of columns or copying and pasting of data. Imagine your original table includes three sites in two counties and a total of 1000 records (levels of units). To associate state, county and site number with the trinomial, you would need to insert three columns and copy and paste data into the right cells for all 1000 records (that is, you have created 3000 additional pieces of data; I hope you didn't waste field time writing your state on each form!). With a relational database, you only need to create three records (12 pieces of data). However, because of the relationship created, you have actually created the same 3000 data points. Sounds a bit more efficient, no?
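As a toy illustration of that trinomial example (the table and column names, and the trinomials themselves, are invented for the sketch; any relational database, including PostGIS, works the same way):

```python
import sqlite3

# A toy version of the trinomial example; table/column names are invented.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# One record per unique trinomial: state, county and site number live here once.
cur.execute(
    "CREATE TABLE sites (trinomial TEXT PRIMARY KEY, state TEXT, "
    "county TEXT, site_num INTEGER)"
)
# The excavation table stores one record per level per unit and simply
# points at the sites table via the trinomial.
cur.execute(
    "CREATE TABLE excavation (id INTEGER PRIMARY KEY, trinomial TEXT "
    "REFERENCES sites(trinomial), unit TEXT, level INTEGER)"
)

cur.executemany(
    "INSERT INTO sites VALUES (?, ?, ?, ?)",
    [("36BK0001", "PA", "Berks", 1), ("36CH0002", "PA", "Chester", 2)],
)
cur.executemany(
    "INSERT INTO excavation (trinomial, unit, level) VALUES (?, ?, ?)",
    [("36BK0001", "N100E100", 1), ("36BK0001", "N100E100", 2),
     ("36CH0002", "N200E50", 1)],
)

# The "relationship": each excavation record picks up its state and county
# through a join, with no copying and pasting into new columns.
for row in cur.execute(
    "SELECT e.unit, e.level, s.state, s.county "
    "FROM excavation e JOIN sites s ON e.trinomial = s.trinomial"
):
    print(row)
```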

I recently worked with census data from the North Atlantic Population Project. Much of the data is coded. The downloaded data includes numbers that mean nothing to me, but those codes can be linked to text; a 336 in the IND50US column (Industry categories in 1950) means "Blast furnaces, steel works, and rolling mills". The original data table is linked to a small table (indeed many of them) that converts apparently meaningless codes into understandable text. This means that I entered the words "Blast furnaces, steel works, and rolling mills" only once, but they are now associated with all 600 records in the original table from NAPP that included the "336" code in the IND50US column.

Why PostGIS? PostGIS is simply a spatial extension of PostgreSQL, an "object relational database management system." That is, it is a system for creating and organizing relational databases. The main reason for choosing this system is that it is incredibly popular and widely used in industry and academia. It is open source and works on all computer platforms; it is now the native database on Mac OSX servers. It can be stored on a server or on your own computer. I prefer a graphical user interface, and pgAdmin, the "native" client for accessing and editing your database, is not intuitive to me. However, I am in the process of switching my word processing and spreadsheets to the open source LibreOffice suite. LibreOffice Base, their answer to MS Access/ FileMaker Pro, has native support for PostgreSQL. Other database management programs, such as the two mentioned in the previous sentence, also have native support for PostgreSQL (i.e., you do not need to use LibreOffice). Similarly, PostgreSQL/ PostGIS is supported by GRASS and QGIS, both open source GIS programs (this is a huge plus; most data in GIS programs are in the flat files ridiculed above). While PostgreSQL/ PostGIS is certainly not the only option available to do these things, it appeared to be the most widely supported.

Finally, I will openly admit that I have only begun to work with PostGIS/ LibreOffice Base and I am having some difficulties. I will refrain from being too critical yet because it may simply be part of the learning curve.

Kobo Toolbox (a field data collection web app discussed here) yields tables that can be pulled into just such a relational database.

To open a can of worms that I am still struggling with, I will suggest that relational databases will allow field data to be easily converted into (or perhaps collected as) linked open data.