3D rendering of velocity anomalies

Lately I’ve been working on using the yt library (https://yt-project.org) for 3D visualization of seismic data sets. Seismologists generally avoid 3D plots in favor of 2D slices through 3D data, since 3D structure is often hard to interpret. Much of that difficulty comes from highlighting iso-surfaces, but yt is different: it uses ray-tracing to project through a data set, integrating information as it goes. The result is a 3D rendering that preserves the accumulated structure.
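In yt terms, the workflow is short. Here is a minimal sketch of the idea, not the script that produced the rendering described below: the field name dv_slow and the random array are placeholders standing in for the actual gridded anomaly model.

import numpy as np
import yt

# stand-in for the gridded model: the magnitude of the slow (negative)
# shear velocity anomalies on a regular grid, as a random positive array
dv_slow = np.abs(np.random.randn(64, 64, 64)) + 0.01

# wrap the array in a yt "stream" dataset; the bounding box is in arbitrary units
bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
ds = yt.load_uniform_grid({"dv_slow": dv_slow}, dv_slow.shape, bbox=bbox)

# volume render: yt casts rays through the volume and accumulates the field
# values each ray passes through via a transfer function
sc = yt.create_scene(ds, field=("stream", "dv_slow"))
sc.camera.zoom(1.5)
sc.save("dv_render.png")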

Here’s an example of a 3D rendering of negative velocity anomalies in the Western U.S. from 50-1200 km deep, using the shear wave data from James et al. 2011, one of the 3D datasets available via IRIS (https://ds.iris.edu/ds/products/emc-nwus11-s/):

The highlighted structures represent regions in the Earth’s mantle with slower seismic shear wave speeds than a reference model, and the faint grid in the middle of the model domain marks 410 km depth, the top of the mantle transition zone. The white dot on the surface is Yellowstone.

This is a preliminary image, so I won’t dive into interpreting the structure embedded here. For now, just enjoy this mesmerizing GIF:

Bikes! Part 0

Some months ago I discovered that a number of bike shares make their data publicly available, and I’ve been meaning to download some of it and poke around. Well, today I finally had some time. Though I’m not sure what I’ll be doing with this data set yet, I wanted to share a figure I made in my initial exploration.

The following figure shows the number of rides per day and the median ride distance for the Portland, OR bike share (data from BikeTown: https://www.biketownpdx.com/system-data). I threw the code in a new GitHub repo here so you can take a look if interested, but I’m not going to go into detail yet (the code just downloads their system data files, concatenates the monthly files and does some minimal processing and plotting). In any case, the figure:

The neat (and maybe unsurprising) feature is the strong seasonality to bike share usage, both in terms of just the number of rides per day (high in summer, low in winter) and the median distance of each ride (longer rides in summer). There is an interesting spike in total rides around May 2018 — maybe excitement for springtime? or additional bikes added to the program? A plot of bike usage percent (total rides / available bikes) might be more illustrative.
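For the curious, the daily aggregation behind this kind of figure is only a few lines of pandas. A minimal sketch, with a hypothetical data directory and hypothetical column names (start_time, distance_miles) rather than BikeTown’s actual schema:

import glob
import pandas as pd

# gather and concatenate the monthly system-data csv files
files = sorted(glob.glob("biketown_data/*.csv"))
rides = pd.concat((pd.read_csv(f, parse_dates=["start_time"]) for f in files),
                  ignore_index=True)

# rides per day and median ride distance per day
daily = rides.groupby(rides["start_time"].dt.date).agg(
    n_rides=("distance_miles", "size"),
    median_distance=("distance_miles", "median"),
)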

So that’s that for now. Hopefully won’t be too long before I have time for some more in depth analysis.

A Python tool for inspecting shapefiles

In my recent coding exploits, I’ve downloaded lots of different shapefiles. Most shapefiles were accompanied by nice .xml documentation with information about the data and how it’s stored or labeled, but a few had hardly any information at all. I knew the general content based on the description from the website where I downloaded the shapefile, but I didn’t know what they had used for the record labels and I didn’t know what the record values were exactly. So the past couple days I sat down and wrote a bit of code to help in unraveling a mysterious shapefile…

Check out (and/or download) the full Python source here: shapefile inspection!

The program is fairly straightforward. It traverses the records of a shapefile, recording the record label (or “field names” as I refer to them in the source) and information about each record. One of the program’s methods uses the Python XML API ElementTree to produce an xml file that you can load in a browser. Here’s a screenshot from using Firefox to view the xml file produced when running the program on the OpenStreetMap shapefile that I extracted via MapZen for my previous post.

xml_sample_1

In a browser, you can shrink or expand the xml attributes to get some basic information about each record: the name or label of the record, the data type and some sort of sample of the data. If the record data is an integer or float, the sample will be the min/max values of the record, while if it’s a string, it will either be a list of the unique strings in the records or just a sample of some of the strings. The OpenStreetMap shapefile contained some record values that were keywords, like the “highway” attribute in the screenshot above, while other records were strings with unique values for each shape, like the “name” attribute below:

xml_sample_2

In addition to generating an xml file, the program allows you to interactively explore a field.

When you run the program from the command line (type python inspect_shapefile.py in the src directory), it’ll ask for your input. It first asks if you want to give it a shapefile; here I said no and used the shapefile hardwired into __main__ of inspect_shapefile.py:

Do you want to enter the path to a shapefile? (Y/N) N
 
Using shapefile specified in __main__ :
directory: ../../learning_shapefiles/shapefiles/denver_maps/grouped_by_geometry_type/
filename: ex_QMDJXT8DzmqNh6eFiNkAuESyDNCX_osm_line

 Loading shapefile ...
... shapefile loaded! 

It then pulls out all the fields in the shapefile records, displays them and asks what you want to do. This is what it looks like using the OpenStreetMaps shapefile:

Shapefile has the following field names
['osm_id', 'access', 'aerialway', 'aeroway', 'amenity', 'area', 'barrier', 'bicycle', 
'brand', 'bridge', 'boundary', 'building', 'covered', 'culvert', 'cutting', 'disused', 
'embankment', 'foot', 'harbour', 'highway', 'historic', 'horse', 'junction', 'landuse', 
'layer', 'leisure', 'lock', 'man_made', 'military', 'motorcar', 'name', 'natural', 
'oneway', 'operator', 'population', 'power', 'place', 'railway', 'ref', 'religion', 
'route', 'service', 'shop', 'sport', 'surface', 'toll', 'tourism', 'tower:type', 
'tracktype', 'tunnel', 'water', 'waterway', 'wetland', 'width', 'wood', 'z_order', 
'way_area', 'tags'] 

Do you want to investigate single field (single)? Generate xml 
file (xml)? Or both (both)? single

Enter field name to investigate: landuse

So you can see all these different fields. I chose to look at a single field (“landuse”), and the program then looks at the “landuse” record value for each shape, records its data type and saves any new record values:

searching for non-empty entry for landuse ...
data type found: str
Finding unique record values for landuse
1 of 212550 shapes ( 0.0 % )
 new record value: 
93 of 212550 shapes ( 0.04 % )
 new record value: reservoir
6782 of 212550 shapes ( 3.19 % )
 new record value: residential
110432 of 212550 shapes ( 51.95 % )
 new record value: grass
111094 of 212550 shapes ( 52.26 % )
 new record value: construction
Completed field name inspection 

---------------------------------------
Shapefile has the following field names
['osm_id', 'access', 'aerialway', 'aeroway', 'amenity', 'area', 
'barrier', 'bicycle', 'brand', 'bridge', 'boundary', 'building', 
'covered', 'culvert', 'cutting', 'disused', 'embankment', 'foot', 
'harbour', 'highway', 'historic', 'horse', 'junction', 'landuse', 
'layer', 'leisure', 'lock', 'man_made', 'military', 'motorcar', 
'name', 'natural', 'oneway', 'operator', 'population', 'power', 
'place', 'railway', 'ref', 'religion', 'route', 'service', 'shop', 
'sport', 'surface', 'toll', 'tourism', 'tower:type', 'tracktype', 
'tunnel', 'water', 'waterway', 'wetland', 'width', 'wood', 'z_order', 
'way_area', 'tags']

The field name landuse is str
and has 5 unique values
Display Values? (Y/N) Y
 possible values:
['', 'reservoir', 'residential', 'grass', 'construction']

As you can see from the output, there were 4 keywords (reservoir, residential, grass and construction) used to describe the ‘landuse’ field. So I could now write some code to go into a shapefile and extract only the shapes that have a ‘residential’ value for ‘landuse.’ But I couldn’t do that until I (1) knew that the landuse field existed and (2) knew the different values used for the landuse type.
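For example, once you know the field exists and what its values are, pulling out just the residential shapes with pyshp might look something like this (a sketch, not code from inspect_shapefile.py; the path is a placeholder):

import shapefile  # pyshp

sf = shapefile.Reader("path/to/osm_line_shapefile")  # placeholder path

# sf.fields includes a leading DeletionFlag entry, so skip it
field_names = [f[0] for f in sf.fields[1:]]
i_landuse = field_names.index("landuse")

# keep only the shapes whose landuse record equals 'residential'
residential_shapes = [sr.shape for sr in sf.shapeRecords()
                      if sr.record[i_landuse] == "residential"]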

So there it is! That’s the program. Hopefully all the shapefiles you ever download will be well-documented. But if you find one that’s not and you really need to figure it out, this little tool might help!

Some code notes and tips

The xml file the program creates doesn’t follow any particular standard or convention, just what I thought might be useful. Perhaps that could be improved?

REMEMBER THAT IN PYTHON, YOU NEED TO EXPLICITLY COPY LISTS! I stupidly forgot that when you make a list

list_a = list()
list_a.append('blah')
list_a.append('d')

And then want to make a copy of the list, if you do this:

list_b = list_a

Then any changes to list_b will change list_a. But if you do

list_b = list_a[:]

You’ll get a new copy that won’t reference back to list_a. This is probably one of the things that I forget most frequently with Python lists. Palm-smack-to-forehead. 

The XML API ElementTree was pretty great to work with. You can very easily define a hierarchy that will produce a nice xml tree (see this example). I did, however, have some trouble parsing the direct output of the type() function. When you take the type of a value,

type(0.01)

you get this:

<type 'float'>

When I gave it directly to ElementTree (imported as ET here), like this:

ET.SubElement(attr, "attrtype",name="data type").text = type(0.01)

I would get errors because of the enclosed quotation marks. To get around this, I converted the type output to a string, split it on the quotes and took the element that is just the type name (int, str or float):

ET.SubElement(attr, "attrtype",name="data type").text = str(type(0.01)).split("'")[1]
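Putting those pieces together, writing out a small tree with ElementTree boils down to something like this (a stripped-down sketch of the idea, not the full program):

import xml.etree.ElementTree as ET

root = ET.Element("shapefile_fields")

# one element per field, with the cleaned-up type name as a sub-element
attr = ET.SubElement(root, "field", name="landuse")
ET.SubElement(attr, "attrtype", name="data type").text = str(type("grass")).split("'")[1]

ET.ElementTree(root).write("field_summary.xml")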

Mapping some things!

Since publishing my series of posts on manipulating shapefiles in Python (1, 2 and 3), I’ve been exploring different open data catalogs so I thought I’d share some of the maps I’ve mapped! All these were produced using scripts in my learning_shapefiles repository on GitHub (see denver_stack.py and denver_tree_canopy.py).

Downtown Denver

denver_downtown

MapZen is a pretty sweet service! You can draw a regional box, and the folks at MapZen will extract all the OpenStreetMap data within that box and give you the shapefiles with all that info. The first thing I (stupidly) did after downloading my data extraction from MapZen was to just plot all of the shapes… and I then proceeded to sit around for quite some time while my laptop chugged away and produced a highly detailed map of all the things in Denver County. I zoomed in to downtown Denver for the above image to show off the detail.

Metro-Denver

denver_road_rivers

A more abstract representation of the primary roadways in metro Denver. I figured out how the MapZen/OpenStreetMap shapefile was organized and only plotted motorways, primary and secondary roads (bright white in the map). I also created a grid containing the shortest distance to a roadway (using the distance method available for shapely.geometry.LineString) and contoured the inverse distance (1/x) to evoke the topographic contours along rivers.
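The distance grid itself is conceptually simple. A minimal sketch with a single made-up road segment (the real script loops over all of the plotted roads):

import numpy as np
from shapely.geometry import LineString, Point

road = LineString([(-105.05, 39.70), (-104.95, 39.78)])  # a made-up road segment

# a regular grid over the map area
lons = np.linspace(-105.1, -104.9, 200)
lats = np.linspace(39.6, 39.85, 200)

# shortest distance from each grid node to the road, then the inverse to contour
dist = np.array([[road.distance(Point(lon, lat)) for lon in lons] for lat in lats])
inv_dist = 1.0 / (dist + 1e-6)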

Tree Cover in Denver County

denver_1

Denver’s Open Data Catalog has a bunch of databases with shapefiles galore. I downloaded one for the Tree Canopy and then plotted up the percent tree cover in each geometry. This is what actually led me to learn how to plot roadways… and here I overlaid the tree cover on the same MapZen extraction of OpenStreetMap data. Along with the roadways underneath, it forms a sort of abstract tree.

So that’s it for today. Three maps from a couple different open data sources using some inefficient Python. Not going to go into detail on the code this time, because, well, it’s slooow. I’m using modifications of the simple scripts that I used in my shapefile tutorials because I was excited to just plot things! But there are much better ways to handle shapefiles with hundreds of thousands of geometries. So until next time (after I figure out how to use fiona or geopandas), just enjoy the visuals!

Shapefiles in Python: shapely polygons

In my last post, I described how to take a shapefile and plot the outlines of the geometries in the shapefile. But the power of shapefiles is in the records (the data) associated with each shape. One common way of presenting shapefile data is to plot the shapefile geometry as polygons that are colored by some value of data. So as a prelude to doing just that, this post will cover how to plot polygons using the shapely and descartes libraries. As always, my code is up on my github page.

The two python libraries that I’ll be using are shapely (for constructing a polygon) and descartes (for adding a polygon to a plot). So step 0 is to go install those! I’ll also be using the numpy and matplotlib libraries, but you probably already have those.

Though the documentation for shapely has some nice sample source code, I wrote my own script, simple_polygons.py, to get to know the libraries better. In this approach, there are two steps to building a polygon from scratch: constructing the points that define the polygon’s shape and then mapping those points into a polygon structure. The first step doesn’t require any special functions, just standard numpy. The second step uses the shapely.geometry.Polygon class to build a polygon from a list of coordinates.

There are limitations for valid polygons, but virtually any shape can be constructed, like the following pacman:

pacman

The first step is to build the list of coordinates defining the exterior points (the outer circle) and a list of interior points to exclude from the polygon (the eyeball). Starting with the exterior points, I calculate the x and y coordinates of a unit circle from 0.25pi to 1.75pi (0 to 2pi would trace out a whole circle rather than a pacman):

theta = np.linspace(0.25*3.14,1.75*3.14,80) 

# add random perturbation 
max_rough=0.05 
pert=max_rough * np.random.rand(len(theta)) 

x = np.cos(theta)+pert 
y = np.sin(theta)+pert

I also add a small random perturbation to each x-y position to roughen the outer pacman edge, to better mimic the small-scale roughness of the shapefiles I’d be plotting later. Next, I build a python list of all those x-y points. This list, ext, is the list of exterior points that I’ll give to shapely:

# build the list of points 
ext = list() 

# loop over x,y, add each point to list 
for itheta in range(len(theta)): 
    ext.append((x[itheta],y[itheta])) 

ext.append((0,0)) # add 0 point

At the end, I add the 0,0 point, otherwise the start and end points on the circle would connect to each other and I’d get a pacman that was punched in the face:

pacman_punch

That takes care of the exterior points, and making the list of interior points is similar. This list, inter, will be a list of points that define interior geometries to exclude from the polygon:

# build eyeball interior points 
theta=np.linspace(0,2*3.14,30) 
x = 0.1*np.cos(theta)+0.2 
y = 0.1*np.sin(theta)+0.7 

inter = list() 
for itheta in range(len(theta)): 
    inter.append((x[itheta],y[itheta])) 
inter.append((x[0],y[0]))

Now that we have the lists of exterior and interior points, we just hand them to shapely’s Polygon class (shapely.geometry.Polygon):

polygon = Polygon(ext,[inter[::-1]])

Two things about passing Polygon the interior list: (1) you can actually pass Polygon a list of lists to define multiple areas to exclude from the polygon, so you have to add the brackets around inter and (2) I haven’t quite figured out the [::-1] that the shapely documentation includes. I know that generally, [::-1] will take all the elements of a list and reverse them, but why does Polygon need the points in reverse? No idea. Without it, I only get an outer edge defining the eyeball:

pacman_badeye

I would love to get some information on why Polygon needs the reversed list, so leave me a note in the comments if you know why.

Regardless, the next step is to add that polygon structure to a plot, with a straightforward use of matplotlib.pyplot (imported as plt) and descartes.patch.PolygonPatch:

 

# initialize figure and axes 
fig = plt.figure() 
ax = fig.add_axes((0.1,0.1,0.8,0.8)) 

# put the patch on the plot 
patch = PolygonPatch(polygon, facecolor=[0,0,0.5], edgecolor=[1,1,1], alpha=1.0) 
ax.add_patch(patch) 

# new axes 
plt.xlim([-1.5, 1.5]) 
plt.ylim([-1.5,1.5]) 
ax.set_aspect(1) 

plt.show()

PolygonPatch’s arguments are pretty self-explanatory: facecolor and edgecolor set the colors for the fill and edge of the polygon. Conveniently, facecolor and edgecolor can be specified as RGB values, which I’ll take advantage of for plotting shapefile records in my next post. PolygonPatch also accepts any of the kwargs available to the matplotlib.patches.Polygon class (like the transparency, alpha, a value between 0 and 1).

So that’s it! Pretty easy! And in some ways it is even easier to plot polygons from a shapefile, since pyshp imports shapefile coordinates as a list and you can just give that list directly to Polygon… more on that in the next post.
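As a teaser, the shapefile version is roughly this (a sketch that assumes a shapefile of simple polygons with no holes, and a placeholder path):

import matplotlib.pyplot as plt
import shapefile  # pyshp
from descartes.patch import PolygonPatch
from shapely.geometry import Polygon

sf = shapefile.Reader("path/to/some_polygons")  # placeholder shapefile
fig, ax = plt.subplots()
for shp in sf.shapes():
    # shp.points is already a list of coordinate pairs, exactly what Polygon wants
    ax.add_patch(PolygonPatch(Polygon(shp.points), facecolor=[0, 0, 0.5], edgecolor=[1, 1, 1]))
ax.autoscale()
plt.show()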

Taxi Tracers: Mapping NYC

The city of New York has been recording and releasing trip data for every single taxi ride in the city since 2009 (TLC Trip Data, NYC), and these publicly available datasets are a treasure trove of information. Every single time a NYC taxi picks up a passenger, 19 different pieces of information are recorded, including the GPS coordinates of pickup and dropoff, the total fare and the distance traveled. Think about how many cabs you see while walking the streets of New York and then imagine how many times in a day just one of those cabs picks up a passenger… yeah, that’s a lot of data. A single data file from TLC Trip Data, NYC contains data for one month between 2009-2016 and weighs in at a MASSIVE 2 GB.

There have already been plenty of cool visualizations and analyses of this data set. Todd W. Schneider processed the ENTIRE data set from 2009-2015, amounting to over a billion taxi trips (link), and found lots of interesting trends. His comparison of yellow and green taxis to Uber rides in individual boroughs (link) is particularly interesting. Chris Wong created a visualization that follows a randomly selected cab through a day of driving (link) and Juan Francisco mapped over 10,000 cabs in a 24 hour period to create a mesmerizing video (link). Both used the start and end points from the TLC Trip data and created a best-guess route via the Google Maps API. It’s pretty fun to watch the taxis jump around the city, especially if you happen to have just read Rawi Hage’s novel Carnival, in which the main character, a cab driver, describes one type of cabbie as “the flies”: opportunistic wanderers floating the streets waiting for passengers and getting stuck in their own web of routine.

In my own exploration of the data, I’ve been following two main threads. The first, which I’ll describe today, is map-making using gps coordinates of the taxis. The second, which I’ll leave to a future post, involves a more statistical analysis of the data focused on understanding average taxi speed. I’ll start by describing the final product, then dig into the details of my coding approach (python source code is available on my github page here).

Taxi Maps (the final products)!

Just look at the map, it’s beautiful:

taxi_count_map_maxval_labeled

This map was made by creating a rectangular grid spanning New York City and counting how many pickups occurred within each grid bin (more details on this below) over three months of data. The color scale is the base-10 log of the number of pickups in each bin (yellow/white means many pickups, red fewer and black none). The horizontal (longitudinal) direction is divided into 600 bins and the vertical (latitudinal) direction into 800 bins, a grid fine enough to resolve individual streets.

The level of detail visible in this map is impressive. My eye was initially drawn to the odd half-circle patterns in the East Village, which it turns out is Stuyvesant Village. It’s also odd that the bridges to Brooklyn and Queens show up at all. I’m pretty sure it’s impossible for cabs to pick up anyone on one of these bridges. Perhaps the presence of the bridges arises from uncertainty in the GPS coordinates or cabbies starting their trip meter after getting on the bridge? Hard to say. But it’s kinda neat! The other very clear observation is the drop in cab pickups in Manhattan around the north end of Central Park. The NYC Taxi and Limo Commission observed this same feature and calculated that 95% of pickups in Manhattan occurred south of this boundary, which led to the creation of the green Boro taxis in order to provide more equitable access to taxis (more info here and here).

In addition to simply mapping taxi pickup location, some interesting trends are visible by changing the color scale to reflect different variables. The following map takes the same pickup locations, but colors each grid point by the average distance traveled away from the pickup location:

trip_distance_vs_lat

The map is fairly uniform in color, except around Wall Street and the southern tip of Manhattan, where the more yellow coloration shows a generally higher trip distance. It’s hard to pick off numbers from the color scale, so I took the map and created a modified zonal average: at each latitude, I averaged across the longitudinal grid points, using only grid points with non-zero values, and then plotted the result (above, right). The line plot shows pretty clearly that cabs picking up passengers around Wall Street tend to go about 2 miles farther than trips originating anywhere else. Why is this? Hard to say without a lot more analysis, but 4 to 4.5 miles happens to be about the distance up to Penn Station or Grand Central Station, so I suspect commuters who don’t mind spending extra money to avoid the subway on their way to the train.
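That modified zonal average is only a couple lines of numpy with a masked array. A sketch, using a random stand-in for the gridded trip-distance array:

import numpy as np

# stand-in for the gridded mean-trip-distance array (0 where a bin has no pickups)
trip_grid = np.random.rand(800, 600)
trip_grid[trip_grid < 0.3] = 0.0

masked = np.ma.masked_equal(trip_grid, 0.0)  # ignore the empty bins
zonal_mean = masked.mean(axis=1)             # average over longitude at each latitude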

In the following, I’ll dive into the horrid details of how I made the above maps; read on if you dare! Future posts will look into some more complicated analysis of the yellow cab data, beyond simple visualization, so stay tuned for that…

Data Description

The datafiles at TLC Trip Data, NYC are HUGE comma separated value (csv) plain text files, each weighing in at about 2 GB. A single file covers all entries for a single month and contains 19 entries per line. A single line in the csv file looks like:

1,2015-03-06 08:02:34,2015-03-06 08:14:45,1,3.80,-73.889595031738281,40.747310638427734,1,N,-73.861953735351563,40.768550872802734,2,14,0,0.5,0,0,0.3,14.8

The TLC yellow cab data dictionary describes what each entry is, but the ones I find most interesting are: pickup time (entry 2), dropoff time (entry 3), passenger count (entry 4), trip distance (entry 5), pickup and dropoff longitude and latitude coordinates (entries 6, 7, 10 and 11, respectively), fare amount (entry 13) and tip (entry 16). So in the most basic terms, all I’ve done is read each line of the file, pull out the entries of interest along with their GPS coordinates and then find a way to plot that aggregated data.

Source Code, Tricks and Tips

The source code (available here) is divided into two python modules: taxi_main.py and taxi_plotmod.py. The first deals with reading in the raw .csv files while the second contains routines for processing and plotting that data. The source includes a README and several scripts that show examples of using the modules, so I won’t go into that here. Rather, I’ll describe my general approach along with several problems I ran into (and their solutions).

Reading the data

Reading in a single csv data file is relatively straightforward. The standard open command (open(filename, ‘r’)) allows you to read in one line at a time. The functions of interest are in taxi_main.py: read_taxi_file loops over all of the csv files in a specified directory and for each file calls read_all_variables. I did add some fancy indexing in the form of a list of strings that identify which of the 19 variables you want to save (see the Vars_To_Import variable) so that you don’t have to import all the data. Check out the main function in taxi_main.py to see an example of using the commands to read in files.
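Stripped of the bookkeeping, the core of that reading loop is plain Python. A simplified sketch (the column positions and filename are illustrative, not the actual Vars_To_Import mechanics):

# columns to keep and their 0-based position in each csv line (illustrative only)
columns = {"trip_distance": 4, "pickup_lon": 5, "pickup_lat": 6}

rows = []
with open("yellow_tripdata_2015-03.csv", "r") as f:
    next(f)  # skip the header line
    for line in f:
        vals = line.rstrip().split(",")
        if len(vals) < 19:
            continue  # skip blank or malformed lines
        rows.append([float(vals[i]) for i in columns.values()])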

One of the design decisions I made early on was to read in the data and store it in memory before processing it. This means there is a big array (Vars or VarBig in taxi_main.py) that stores all of the imported data before feeding it to the routines for spatially binning the data (see below). This seriously limits the scalability of the present code: for the 3 months of data used in the above maps, I was using up 30-45% of my laptop’s memory while reading and processing the data. Including more months of data would use up the remaining RAM pretty quickly. The way around this problem would be to write some routines to do the spatial binning or other processing on the fly, so that the whole data set wouldn’t need to be stored. The advantage of holding the data set in memory, however, is that I can easily process and re-process the data multiple times without the bottleneck of reading in the files. So I stuck with the present design rather than rewriting everything…

Spatially binning point data

To create spatial maps and images, I had to go from point data to a gridded data product. This is a common practice in geo-located data sets (data sets that have latitude, longitude and some associated measurement at those coordinates) and though there are many complicated ways to carry out this process, the concept is simple:

gridded_product_concept

Conceptual cartoon of how to make a gridded data product from distributed point data

Each data point has a latitude, longitude and associated measurement (the color of the dots in the conceptual image above), and the data points are spatially scattered according to their coordinates. To make a gridded data product, I first specify a latitude/longitude grid (dashed lines, left). I then find which points lie within each bin, aggregate the measurements within that bin and store the new single value in the bin. In this conceptual case, I’ve averaged the color of the data points to define the new gridded data product. In the NYC taxi maps shown above, the gridded value is the total number of pickups within a bin for the first map and the average of all the trip distance measurements within a bin for the second.

Since the reading of the raw data file and subsequent spatial binning can take a few minutes, the code can write out the gridded data products (see the main function in taxi_main.py). These files are only a few MB in size, depending on the resolution of the grid, so it’s much easier to repeatedly load the gridded data products rather than re-processing the dataset when fine tuning the plotting.

pyTip: where. One of the useful python tricks I learned here is the numpy routine where, which identifies the indices of an array that meet a specified criterion. Say you have a giant array X with values between 0 and 100. If you want to find the mean of X using only values less than 50, you could use np.mean(X[np.where(X<50)]).

pyTip: looping efficiency. Less of a python tip and more of a computational tip, but there are several ways to write a routine that spatially bins point data. The first approach that I stupidly employed was to loop over each bin in the spatial grid, find the points that fall within that bin and then aggregate them. If N is the total number of data points, Nx is the number of bins in the x dimension and Ny is the number of bins in the y dimension, then in pseudo-code this looks like

 for ix in 1:Nx
     for iy in 1:Ny
 
      Current_Grid_Values = All data points within Lat_Lot[ix,iy]
      GridValue[iy,ix]=mean(Current_Grid_Values)

Why is this approach so bad? Well, it involves searching through the entire data set of N values each time through the loop. N is huge (the smallest cases I ran have N > 50,000; for the maps above, N > 1,000,000), so for a fine grid like in the maps above (where Nx = 600 and Ny = 800), this is incredibly slow.

A much better way to do it is to loop over N, find the corresponding bin for each point and aggregate the measurements within that bin (again, this is pseudo-code; check out the real source, map_proc within taxi_plotmod.py, to see the working python code):

 for iN in 1:N
      Measurement = all_data_points[iN]
      Current_Lat_Lon = Lat_Lot[iN]
      find ix,iy that contains the Current_Lat_Lon 
      GridValue[iy,ix]=GridValue[iy,ix] + Measurement
      GridCount[iy,ix]=GridCount[iy,ix] + 1

After the loop, we can find the mean measurement within each bin by normalizing by the total number of measurements in that bin:

 GridValueMean=GridValue / GridCount

A simple switch, but it scales to large values of N, Nx and Ny much better. The above maps took only a few minutes to process using the second (good) approach, while when I initially tried using the first (stupid) approach, the code was still running when I killed it after 30 minutes.
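In actual Python, the single pass over the data can even be vectorized. The following is a numpy sketch of the same idea using digitize and add.at, not the loop that lives in map_proc, and the coordinates here are random stand-ins:

import numpy as np

# stand-in data: N points, each with a lon, lat and a measurement
N, Nx, Ny = 100000, 600, 800
lon = np.random.uniform(-74.05, -73.75, N)
lat = np.random.uniform(40.60, 40.90, N)
meas = np.random.rand(N)

lon_edges = np.linspace(-74.05, -73.75, Nx + 1)
lat_edges = np.linspace(40.60, 40.90, Ny + 1)

# bin index for every point: a single pass over N, no search over the grid
ix = np.clip(np.digitize(lon, lon_edges) - 1, 0, Nx - 1)
iy = np.clip(np.digitize(lat, lat_edges) - 1, 0, Ny - 1)

grid_sum = np.zeros((Ny, Nx))
grid_count = np.zeros((Ny, Nx))
np.add.at(grid_sum, (iy, ix), meas)  # accumulate measurements per bin
np.add.at(grid_count, (iy, ix), 1)   # count measurements per bin

with np.errstate(invalid="ignore"):
    grid_mean = grid_sum / grid_count  # NaN where a bin has no data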

Plotting the spatially binned data: The plotting makes use of the matplotlib module, specifically the pcolormesh function. I went through a lot of color maps and ended up settling on ‘hot.’ It gives the maps an almost electric feel. Not much else to say about the plotting, pretty standard stuff. Check out plt_map within taxi_plotmod.py if you want the details.
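For reference, the plotting call itself is about this simple (the gridded counts and bin edges below are random stand-ins for the real thing):

import matplotlib.pyplot as plt
import numpy as np

# stand-ins for the gridded pickup counts and bin edges from the binning step
grid_count = np.random.poisson(2.0, size=(800, 600)).astype(float)
lon_edges = np.linspace(-74.05, -73.75, 601)
lat_edges = np.linspace(40.60, 40.90, 801)

log_counts = np.log10(np.where(grid_count > 0, grid_count, 1.0))  # empty bins plot as 0
plt.pcolormesh(lon_edges, lat_edges, log_counts, cmap="hot")
plt.gca().set_aspect("equal")
plt.colorbar(label="log10(pickups per bin)")
plt.show()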

Stay tuned for more exploration of the NYC taxi data!

Animating Historical Urban Population Growth

In their recent article, “Spatializing 6,000 years of global urbanization from 3700 BC to AD 2000”, Reba et al. describe efforts to create a digital, geocoded dataset tracking the distribution of urban locations since 3700 BC. The digital database provides a record of human movement over the geologically recent past and is useful for understanding the forces that drive human societies towards urbanization. The database seemed like a fun test of my fledgling python skills, so in this post I’ll describe a visualization of Reba et al.’s database. Reba et al. released their data here. My python source is available via GitHub here. As this is my first actual post on the blog, let me remind you that I’m something of a python noob. So if you actually check out the full source code, I expect you’ll find lots of mistakes and perhaps some poor choices. You’re welcome to let me know of all the ways I could improve the code in the comments section below.

Before jumping into what I actually did, a bit more on the dataset. Reba et al. (2016) created their new digital database from two earlier compilations, Chandler (1987) and Modelski (2000, 2003). The authors of those previous studies meticulously scoured archaeological and historical records in search of locations of urban centers. Reba et al. took those datasets and created a .csv file listing each city’s name, latitude, longitude, population estimate and a time corresponding to the population estimate, a process that involved transcribing the original databases manually (ouch!! none of the available automated print-to-digital methods worked) and geocoding each entry to obtain a latitude and longitude. In the end, Reba et al. ended up with three digital .csv datasets: Chandler’s database for urban centers from 2250 BC to 1975 AD, Modelski’s ancient city database covering 3700 BC to 1000 AD and Modelski’s modern 2000 AD database. All told, there are just over 2000 unique cities recorded between the three databases, many of which have multiple population estimates through time.

It’s worth noting that the historical database is by no means complete. As Reba et al. discuss in detail, omissions of urban centers from the original Chandler and Modelski databases, unclear entries in the original databases or errors in transcription would all result in missing cities. South Asian, South American, North American, and African cities are particularly under-represented. As a geologist, I’m used to incomplete records. Interpreting a fossil record, a regional sedimentary sequence or structural juxtaposition often requires some interpolation. A given rock unit may be preserved in one location while it is eroded and lost to knowledge in another. Thus, the completeness of any (pre)historical dataset depends on both preservation and sampling – there could be cities missing because the local climate did not preserve abandoned structures or simply because archaeology is a relatively young pursuit and excavation efforts have traditionally focused on a small fraction of the Earth’s surface. But as Reba et al. state, “These data alone are not accurate global representations of all population values through time. Rather, it highlights the population values of important global cities during important time periods.”

I was pretty impressed by Reba et al.’s work and decided their dataset would provide an interesting context for improving my python. So I set out to write some python code capable of importing the database and making some pretty plots (source code here). Note that I do not distribute Reba et al.’s database with my source; you’ll have to download it separately from their site here. See the README file in the source code for a list of other dependencies required to run the code, which was only tested with python 2.7.

Before digging too deeply into the code, let’s just start with the end product. Here’s an animation of Chandler’s data set from 2250 BC to present day. Each circle is an urban center and the color of circles changes from blue to red as the time approaches present day.

Pretty neat! We can see clustering and expansion of early urban centers around Mesopotamia, then seemingly separate loci of urban development popping up in East Asia and Central America. Along South America, it’s interesting how urban centers trace out the coastline. And as the animation approaches present day, the number of urban centers explodes (from 1960 to 2014, the percentage of the world’s population living in urban settings increased from 34% to 54%).

In addition to animations, the python source can plot a single frame for a user-specified time range. Here are the entries for Modelski’s Ancient Cities database from 500 BC to 50 AD:

HistoricalCities_Modelski

The Code

The three main steps to producing the above animation were (1) import the data, (2) subsample the data and (3) plot the data. The module urbanmap.py includes all functions needed to reproduce the above figures. And the scripts ex_animate.py and ex_single_frame.py are examples that call the appropriate functions to create the above animation and single frame plot.

In the following, I’ll walk through the different steps and their related functions in urbanmap.py. Again, full source code is here.

(1) Importing the data

The first step is to actually read in some data! The function urbanmap.load_cities does just that for a specified dataset. The directory containing the dataset and the name of the dataset file are given by the data_dir and city_file arguments, respectively:

 39 def load_cities(data_dir,city_file):
 40     """ loads population, lat/lon and time arrays for historical 
 41         city data. """

The function works with any of the three original plain text, comma separated value (CSV) files from Reba et al.: chandlerV2.csv, modelskiAncientV2.csv and modelskiModernV2.csv.

Each .csv file has a row for each city, where the columns are the City Name, Alternate City Name, Country, Latitude, Longitude, Certainty and Population. So first, I open the .csv file and create a csv reader object:

 44 #   load city files
 45     flnm=data_dir+city_file
 46     fle = open(flnm, 'r') # open the file for reading
 47     csv_ob=csv.reader(fle,skipinitialspace=True)  

Some of the Alternate City Names have commas within quotes, which causes those entries to split. Adding the second argument (skipinitialspace=True) to csv.reader prevents those commas within quotes from being read as a new comma-separated value.

The remainder of the function reformats the data into arrays that I find more flexible to work with. First, I generate an array called Time, which contains every time for which a population record exists. In the original .csv files, the header of each Population column gives the time to which the population estimate corresponds. The header values are strings, such as BC_1400, BC_1390, …, AD_100, AD_110, AD_1000… So the first thing I do is convert these header strings to a 1D numpy array where each element is the time in years before present (ybp).

 57 #   get header line 
 58     header = fle.next()
 59     header = header.rstrip()
 60     header = header.split(',')
 61 
 62 #   build the Time array
 63     nt = len(header)
 64     Time_string=header[6:nt]
 65     nt = len(Time_string)
 66     Time = np.zeros((nt,1))
 67 
 68 #   convert BC/AD to years before present 
 69     for it in range(0,nt):
 70         ct = Time_string[it]
 71         ct = ct.split('_')
 72         Time[it]=bc_ad_ce_to_ybp(ct[1],ct[0])

Why go through all this trouble? Well, let’s say I want all cities with a recorded population for the time range 1400 BC to 50 AD. If I knew the header values exactly, I could find the indices in Time_string corresponding to BC_1400 and AD_50. But the headers aren’t uniform within a single .csv file or across .csv files. The way I’ve constructed the Time array, however, allows for straightforward conditional indexing. The usefulness becomes more apparent after reading in the remaining data, and I describe it in section 2 below.
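To make the conversion concrete, here is roughly what a helper like bc_ad_ce_to_ybp does. This is only a sketch; I’m assuming “present” is 2000 AD and ignoring the lack of a year zero, which may not match the choices in urbanmap.py:

def to_years_before_present(year, era, present=2000):
    """Convert a (year, era) pair like (1400, 'BC') into years before present."""
    year = float(year)
    if era.upper() in ("BC", "BCE"):
        return present + year  # ignoring the lack of a year zero
    return present - year      # AD / CE

print(to_years_before_present(1400, "BC"))  # 3400.0
print(to_years_before_present(110, "AD"))   # 1890.0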

The next lines (lines 74-101 of urbanmap.py) are pretty straightforward. Each row of the database is read in and distributed to one of three arrays: PopuL, city_lat and city_lon. The latter two contain the latitude and longitude of every city. PopuL is a 2D matrix with a row for each city and a column for each population record (i.e., PopuL.shape is n_city by n_Times).

I did run into some trouble with blank space characters. A few lines of the .csv files have some non-ascii blank space characters ‘\xa0’ that resulted in errors when I tried to convert the entries into floating point values. So I had to replace those characters with a true blank space before converting:

 81     for row in csv_ob:
 82 #       read in current line
 83         line = row
 84         line = [item.replace('\xa0',' ') for item in line]

 87 #       pull out lat/lon    
 88         city_lat[indx] = float(line[3])
 89         city_lon[indx] = float(line[4]) 

And that’s about it for reading in the data…

(2) Subsampling the data

Now that I’ve got all the data loaded, I want to be able to subsample that data for a specified time range. The main function to do this in urbanmap.py is get_lon_lat_at_t:

177 def get_lon_lat_at_t(year_range,city_lon,city_lat,Time,PopuL):

Most of the arguments (city_lon, city_lat, Time, PopuL) are returned by the load_cities function described above. The year_range argument is a string that specifies the time span for which I want to select cities with non-zero population records. I chose to make year_range a comma separated string:

year_range='5000,BCE,1900,CE'

This year range starts at 5000 BCE and ends at 1900 CE. The era label can be BCE, CE, BC or AD. Within get_lon_lat_at_t, I first convert this year_range to a start and end date in ybp:

186 #   convert year_range to years before present
187     year_range=year_range.replace(" ", "")
188     years=year_range.split(',')
189     time_range=[0,0]
190     time_range[0]=bc_ad_ce_to_ybp(years[0],years[1])
191     time_range[1]=bc_ad_ce_to_ybp(years[2],years[3])

Now I can easily select the cities within the desired time range without knowing beforehand whether the chosen endpoints exist exactly in the Time array. First, I loop through each city and select the population records of the current city:

193 #   find lat and lon of cities with recorded populations in database
194     for icit in range(0,ncit):
195         pop=PopuL[icit,:] # current population

Next, I find the times at which the current city has a nonzero population record:

196         pop_t=Time[pop>0] # times with nonzero pops    

And now I pull out the times that fall within the specified time range:

197         pop_t=pop_t[pop_t<=time_range[0]] # pops within time range
198         pop_t=pop_t[pop_t>=time_range[1]]

The final step is to check whether any cities are left. If a city has no nonzero population record in the specified time range, I flag it for removal:

200         if pop_t.size == 0: # flag for removal 
201            lons[icit]=999.
202            lats[icit]=999.

So at the end of all this, I select the lons and lats that are not equal to 999, and those are the longitudes and latitudes of the cities with a nonzero population within the specified time range. Neat-o! I can now return these lon/lat values and make some plots!
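The final selection is then just a pair of boolean masks (a tiny sketch with made-up values):

import numpy as np

lons = np.array([12.5, 999., -77.0, 999., 31.2])  # 999. flags cities with no record
lats = np.array([41.9, 999., -12.0, 999., 30.0])

keep = lons != 999.
lons, lats = lons[keep], lats[keep]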

(3) Plotting the data

Now that we’ve got a function to subsample the data, we can plot that data in a number of ways. The simplest place to start is to put all the lat/lon points of cities with a nonzero population record for a specified time range on a map. This is what the __main__ function of urbanmap.py and the script ex_single_frame.py accomplish. In both, I sequentially call the load_cities and get_lon_lat_at_t functions, then plot the resulting lat/lon values using the basemap toolkit (mpl_toolkits.basemap).

The plotting is accomplished in two functions: urbanmap.base_plot() and urbanmap.city_plot(). The first creates a basemap object with the background image of the globe while the second actually puts the current lat/lon values onto that background image. base_plot() follows this example pretty closely.

The following, from ex_single_frame.py, shows the full sequence to plot cities within a current time span.

 33 import urbanmap as um
 34 import matplotlib.pyplot as plt
 35 
 36 # select time range
 37 time_span='500,BCE,50,CE'
 38        # comma separated string noting desired time span. The age descriptor can 
 39        # be BC, BCE, AD or CE.
 40 
 41 # select data set
 42 data_dir='../data_refs/'
 43 city_file='modelskiAncientV2.csv'
 49 
 50 # import data set 
 51 (Time,PopuL,city_lat,city_lon)=um.load_cities(data_dir,city_file)
 52 
 53 # get lon/lat of cities in time span    
 54 lons,lats,time_range=um.get_lon_lat_at_t(time_span,city_lon,city_lat,Time,PopuL)
 55 
 56 # plot it
 57 plt.figure(facecolor=(1,1,1))
 58 m = um.base_plot() # create base plot and map object
 59 um.add_annotations() # adds references
 60 um.city_plot(lons,lats,m,'singleframe',time_range,Time) # plot points
 61 plt.show() # and display the plot

Now that we have functions to pick out cities within a time range and then plot those points, creating an animation is conceptually straightforward: I just needed to repeatedly call get_lon_lat_at_t and city_plot, varying the time range with each call. In practice, however, sorting through the routines in the python animation package was the trickiest part of this whole exercise. I initially gave up on the animation routines and simply looped over a time range, subsampling and plotting at each increment and saving the figure at each step along the way. I was then left with a bunch of image files (the frames of the animation), which I concatenated into an animation using a bash script and ImageStack.

In the end, I managed to figure out the python animation routines, and that’s what I included in the source code.

PyTip: shifting the basemap. I used two functions to rotate the center longitude of the map: mpl_toolkits.basemap.Basemap() and mpl_toolkits.basemap.shiftgrid(). The Basemap function creates the longitude/latitude projection while shiftgrid rotates the topography data to align with the Basemap. BOTH functions take a lon0-style argument, but in Basemap, lon_0 is defined as the center longitude of the projection while in shiftgrid lon0 is the westernmost longitude. I was tripped up by this for a while because I assumed the argument had the same meaning in each… whoops.
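A sketch of the two calls side by side, with the differing meanings in the comments (the topography array here is a random stand-in):

import numpy as np
from mpl_toolkits.basemap import Basemap, shiftgrid

# stand-in global topography on a one-degree grid, including the cyclic point
topo_lons = np.linspace(-180.0, 180.0, 361)
topo = np.random.rand(180, topo_lons.size)

lon_center = 100.0
m = Basemap(projection="robin", lon_0=lon_center)  # lon_0: CENTER longitude of the map

# shiftgrid's lon0 is the WESTERNMOST longitude of the returned grid,
# so shift the data to start half a globe west of the map center
topo, topo_lons = shiftgrid(lon_center - 180.0, topo, topo_lons, start=True)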

PyTip: accessing the data for animation. Most of the tutorials for the animation function (matplotlib.animation.FuncAnimation) are relatively simple and are set up to re-calculate the data to plot at each frame. The issue I ran into was that FuncAnimation animates a specified function by sequentially feeding it a frame index. I couldn’t figure out how to pass additional arguments (the full dataset), and importing the data set at each frame would be way too slow; I had an existing dataset that I wanted to read in only once at the start of the program. I first got around this by declaring the database variables (PopuL, city_lats, city_lons, …) as global variables so that they’d be accessible within the FuncAnimation call. This was pretty easy, but I’m always a little uncomfortable using global variables. My approach in the end relied on simply better understanding how python scopes variables: by nesting all of the animation functions under one top-level function, any variables set in that top-level function are available at the lower levels (in a sense, they’re locally global?). I found this post useful.
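A stripped-down version of that nesting trick, with hypothetical variable names and the basemap details omitted:

import matplotlib.pyplot as plt
from matplotlib import animation

def animate_cities(frame_times, lons_per_frame, lats_per_frame):
    # everything update() needs lives in this enclosing scope, so no globals
    # and no extra arguments passed through FuncAnimation are required
    fig, ax = plt.subplots()

    def update(iframe):
        ax.clear()
        ax.scatter(lons_per_frame[iframe], lats_per_frame[iframe], s=5)
        ax.set_title(str(frame_times[iframe]))
        return []

    return animation.FuncAnimation(fig, update, frames=len(frame_times))

# usage: anim = animate_cities(times, lons_list, lats_list); anim.save("cities.mp4")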

References:

Reba, Meredith, Femke Reitsma, and Karen C. Seto. “Spatializing 6,000 years of global urbanization from 3700 BC to AD 2000.” Scientific Data 3:160034, doi:10.1038/sdata.2016.34 (2016).
Historical Urban Population Growth Data: http://urban.yale.edu/data
Chandler, Tertius. Four Thousand Years of Urban Growth: An Historical Census. The Edwin Mellen Press (1987).
Modelski, George. World Cities: –3000 to 2000. FAROS 2000 (2003).