A Python tool for inspecting shapefiles

In my recent coding exploits, I've downloaded lots of different shapefiles. Most shapefiles were accompanied by nice .xml documentation with information about the data and how it's stored or labeled, but a few had hardly any information at all. I knew the general content based on the description from the website where I downloaded the shapefile, but I didn't know what they had used for the record labels and I didn't know what the record values were exactly. So over the past couple of days I sat down and wrote a bit of code to help in unraveling a mysterious shapefile…

Check out (and/or download) the full Python source here: shapefile inspection!

The program is fairly straightforward. It traverses the records of a shapefile, recording the record label (or "field names" as I refer to them in the source) and information about each record. One of the program's methods uses the Python XML API ElementTree to produce an xml file that you can load in a browser. Here's a screenshot from using Firefox to view the xml file produced when running the program on the OpenStreetMap shapefile that I extracted via MapZen for my previous post.

xml_sample_1

In a browser, you can shrink or expand the xml attributes to get some basic information about each record: the name or label of the records, the data type and some sort of sample of the data. If the record data is an integer or float, the sample will be the min/max values of the record, while if it's a string, it will either be a list of the unique strings in the records or just a sample of some of the strings. The OpenStreetMap shapefile contained some record values that were keywords, like the "highway" attribute in the screenshot above, while other records were strings with unique values for each shape, like the "name" attribute below:

xml_sample_2

In addition to generating an xml file, the program allows you to interactively explore a field.

When you run the program from the command line (type in python inspect_shapefile.py in the src directory), it'll ask for your input. It first asks if you want to give it a shapefile; here I said no and used the shapefile hardwired into __main__ of inspect_shapefile.py:

Do you want to enter the path to a shapefile? (Y/N) N
 
Using shapefile specified in __main__ :
directory: ../../learning_shapefiles/shapefiles/denver_maps/grouped_by_geometry_type/
filename: ex_QMDJXT8DzmqNh6eFiNkAuESyDNCX_osm_line

 Loading shapefile ...
... shapefile loaded! 

It then pulls out all the fields in the shapefile records, displays them and asks what you want to do. This is what it looks like using the OpenStreetMap shapefile:

Shapefile has the following field names
['osm_id', 'access', 'aerialway', 'aeroway', 'amenity', 'area', 'barrier', 'bicycle', 
'brand', 'bridge', 'boundary', 'building', 'covered', 'culvert', 'cutting', 'disused', 
'embankment', 'foot', 'harbour', 'highway', 'historic', 'horse', 'junction', 'landuse', 
'layer', 'leisure', 'lock', 'man_made', 'military', 'motorcar', 'name', 'natural', 
'oneway', 'operator', 'population', 'power', 'place', 'railway', 'ref', 'religion', 
'route', 'service', 'shop', 'sport', 'surface', 'toll', 'tourism', 'tower:type', 
'tracktype', 'tunnel', 'water', 'waterway', 'wetland', 'width', 'wood', 'z_order', 
'way_area', 'tags'] 

Do you want to investigate single field (single)? Generate xml 
file (xml)? Or both (both)? single

Enter field name to investigate: landuse

So you can see all these different fields. I chose to look at a single field ("landuse"), and the program then looks at the "landuse" record value for each shape, records its data type and saves any new record values:

searching for non-empty entry for landuse ...
data type found: str
Finding unique record values for landuse
1 of 212550 shapes ( 0.0 % )
 new record value: 
93 of 212550 shapes ( 0.04 % )
 new record value: reservoir
6782 of 212550 shapes ( 3.19 % )
 new record value: residential
110432 of 212550 shapes ( 51.95 % )
 new record value: grass
111094 of 212550 shapes ( 52.26 % )
 new record value: construction
Completed field name inspection 

---------------------------------------
Shapefile has the following field names
['osm_id', 'access', 'aerialway', 'aeroway', 'amenity', 'area', 
'barrier', 'bicycle', 'brand', 'bridge', 'boundary', 'building', 
'covered', 'culvert', 'cutting', 'disused', 'embankment', 'foot', 
'harbour', 'highway', 'historic', 'horse', 'junction', 'landuse', 
'layer', 'leisure', 'lock', 'man_made', 'military', 'motorcar', 
'name', 'natural', 'oneway', 'operator', 'population', 'power', 
'place', 'railway', 'ref', 'religion', 'route', 'service', 'shop', 
'sport', 'surface', 'toll', 'tourism', 'tower:type', 'tracktype', 
'tunnel', 'water', 'waterway', 'wetland', 'width', 'wood', 'z_order', 
'way_area', 'tags']

The field name landuse is str
and has 5 unique values
Display Values? (Y/N) Y
 possible values:
['', 'reservoir', 'residential', 'grass', 'construction']

As you can see from the output, there were 4 keywords (reservoir, residential, grass and construction), plus an empty string, used to describe the 'landuse' field. So I could now write some code to go into a shapefile and extract only the shapes that have a 'residential' value for 'landuse.' But I couldn't do that until I (1) knew that the landuse field existed and (2) knew the different values used for the landuse field.
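
For example, here's a minimal sketch (not part of inspect_shapefile.py; the path below is just a placeholder) of what that extraction could look like with pyshp once you know the field name and value to filter on:

# a rough sketch: pull out only the 'residential' landuse shapes with pyshp
import shapefile

sf = shapefile.Reader('path/to/osm_line_shapefile')

# field_names skips the DeletionFlag entry that pyshp prepends to sf.fields
field_names = [f[0] for f in sf.fields[1:]]
i_landuse = field_names.index('landuse')

# keep only the geometries whose landuse record is 'residential'
residential_shapes = [sr.shape for sr in sf.iterShapeRecords()
                      if sr.record[i_landuse] == 'residential']
print len(residential_shapes), 'residential shapes found'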

So there it is! That’s the program. Hopefully all the shapefiles you ever download will be well-documented. But if you find one that’s not and you really need to figure it out, this little tool might help!

Some code notes and tips

The xml file that I create didn’t follow any particular standard or convention, just what I thought might be useful. Perhaps that could be improved?

REMEMBER THAT IN PYTHON, YOU NEED TO EXPLICITLY COPY LISTS! I stupidly forgot that when you make a list

list_a = list()
list_a.append('blah')
list_a.append('d')

And then want to make a copy of the list, if you do this:

list_b = list_a

Then any changes to list_b will change list_a. But if you do

list_b = list_a[:]

You’ll get a new copy that won’t reference back to list_a. This is probably one of the things that I forget most frequently with Python lists. Palm-smack-to-forehead. 
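
For reference, a few equivalent ways to get an independent copy (the deepcopy version only matters if the list contains nested lists or other mutable objects):

list_b = list_a[:]              # slice copy
list_b = list(list_a)           # constructor copy

import copy
list_b = copy.deepcopy(list_a)  # deep copy, for nested lists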

The XML API ElementTree was pretty great to work with. You can very easily define a hierarchy that will produce a nice xml tree (see this example). I did, however, have some trouble parsing the direct output from the type() function. When you check the type of a value,

type(0.01)

you get this:

<type 'float'>

When I gave it directly to ElementTree (imported as ET here), like this:

ET.SubElement(attr, "attrtype",name="data type").text = type(0.01)

I would get errors because the text attribute needs a plain string rather than the raw output of type(). To get around this, I converted the type output to a string, split it up by the quotes and took the index that is just the type name (int, str or float):

ET.SubElement(attr, "attrtype",name="data type").text = str(type(0.01)).split("'")[1]
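
Putting those pieces together, here's a rough sketch of how an xml summary along these lines can be assembled and written out with ElementTree (this is illustrative, not the actual inspect_shapefile.py source; the element names and sample values are made up, though the attrtype line mirrors the snippet above):

import xml.etree.ElementTree as ET

root = ET.Element("shapefile_summary")
for fld_name, fld_type, sample in [('landuse', 'str', 'residential, grass, ...'),
                                   ('z_order', 'int', 'min/max: -5, 10')]:
    attr = ET.SubElement(root, "field", name=fld_name)
    ET.SubElement(attr, "attrtype", name="data type").text = fld_type
    ET.SubElement(attr, "sample", name="sample values").text = sample

ET.ElementTree(root).write("shapefile_summary.xml")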

Mapping some things!

Since publishing my series of posts on manipulating shapefiles in Python (1, 2 and 3), I’ve been exploring different open data catalogs so I thought I’d share some of the maps I’ve mapped! All these were produced using scripts in my learning_shapefiles repository on GitHub (see denver_stack.py and denver_tree_canopy.py).

Downtown Denver

denver_downtown

MapZen is a pretty sweet service! You can draw a regional box, and the folks at MapZen will extract all the OpenStreetMap data within that box and give you shapefiles with all that info. The first thing I (stupidly) did after downloading my data extraction from MapZen was to just plot all of the shapes… and I then proceeded to sit around for quite some time while my laptop chugged away and produced a highly detailed map of all the things in Denver County. I zoomed in to downtown Denver for the above image to show off the detail.

Metro-Denver

denver_road_rivers

A more abstract representation of the primary roadways in metro Denver. I figured out how the MapZen/OpenStreetMap shapefile was organized and only plotted motorways, primary and secondary roads (bright white in the map). I also created a grid containing the shortest distance to a roadway (using the distance method available for shapely.geometry.LineString) and contoured the inverse distance (1/x) to evoke the topographic contours along rivers.
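
The distance-grid idea looks roughly like the following sketch (not the denver_stack.py source; it assumes road_lines is a list of shapely LineString objects already pulled from the shapefile, treats lon/lat as planar coordinates and is brute-force slow for a real dataset):

import numpy as np
from shapely.geometry import Point

lons = np.linspace(-105.1, -104.8, 200)
lats = np.linspace(39.6, 39.9, 200)
dist = np.zeros((len(lats), len(lons)))

for i, lat in enumerate(lats):
    for j, lon in enumerate(lons):
        pt = Point(lon, lat)
        # shortest distance from this grid point to any roadway
        dist[i, j] = min(line.distance(pt) for line in road_lines)

inv_dist = 1.0 / (dist + 1e-6)  # inverse distance, avoiding divide-by-zero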

Tree Cover in Denver County

denver_1

Denver's Open Data Catalog has a bunch of databases with shapefiles galore. I downloaded one for the Tree Canopy and then plotted up the percent tree cover in each geometry. This is what actually led me to learn how to plot roadways… and here I overlaid the tree cover on the same MapZen extraction of OpenStreetMap data. Along with the roadways underneath, it forms a sort of abstract tree.

So that’s it for today. Three maps from a couple different open data sources using some inefficient Python. Not going to go into detail on the code this time, because, well, it’s slooow. I’m using modifications of the simple scripts that I used in my shapefile tutorials because I was excited to just plot things! But there are much better ways to handle shapefiles with hundreds of thousands of geometries. So until next time (after I figure out how to use fiona or geopandas), just enjoy the visuals!

Shapely Polygons: Coloring Shapefile Polygons

In my previous two posts, I showed how to (1) read and plot shapefile geometries using the pyshp library and (2) plot polygons using shapely and descartes. So the obvious next step is to combine the two! And that’s what I’ll cover today, again using my learning_shapefiles github repo along with the shapefile of state boundaries from census.gov.

The Final Map

In case you don’t care about the Python and are just curious about the end product, here’s the final map where the color of each state reflects its total land area:

shapefile_us_colored_by_area_sat

It’s kind of neat to see the gradient of state size from east to west, reflecting the historical expansion of the U.S. westward, but other than that, there’s not much to the map. But it does serve as a simple case for learning to manipulate shapefiles.

The Code

There are two scripts in learning_shapefiles/src of relevance for today’s post: basic_readshp_plotpoly.py and read_shp_and_rcrd.py. The first script is a simple combination of basic_read_plot.py and simple_polygons.py (from my previous two posts), plotting the shapefile geometries using polygons instead of lines, so let’s start there.

basic_readshp_plotpoly.py

The code starts out the same as basic_read_plot.py, but now also imports Polygon and PolygonPatch from shapely and descartes, before reading in the shapefile:

import shapefile
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Polygon
from descartes.patch import PolygonPatch

"""
 IMPORT THE SHAPEFILE 
"""
shp_file_base='cb_2015_us_state_20m'
dat_dir='../shapefiles/'+shp_file_base +'/'
sf = shapefile.Reader(dat_dir+shp_file_base)

The next part of the code plots a single geometry from the shapefile. This is super easy because shapefile.Reader reads a shapefile geometry as a list of points, which is exactly what the Polygon function needs. So we can just give that list of points directly to the Polygon function:

plt.figure()
ax = plt.axes()
ax.set_aspect('equal')

shape_ex = sf.shape(5) # could break if selected shape has multiple polygons. 

# build the polygon from exterior points
polygon = Polygon(shape_ex.points)
patch = PolygonPatch(polygon, facecolor=[0,0,0.5], edgecolor=[0,0,0], alpha=0.7, zorder=2)
ax.add_patch(patch)

# use bbox (bounding box) to set plot limits
plt.xlim(shape_ex.bbox[0],shape_ex.bbox[2])
plt.ylim(shape_ex.bbox[1],shape_ex.bbox[3])

And we get Washington, now as a colored polygon rather than an outline:

shapefile_single

Woo!

And as before, we can now loop over each shape (and each part of each shape), construct a polygon and plot it:

""" PLOTS ALL SHAPES AND PARTS """
plt.figure()
ax = plt.axes() # add the axes
ax.set_aspect('equal')

icolor = 1
for shape in list(sf.iterShapes()):

    # define polygon fill color (facecolor) RGB values:
    R = (float(icolor)-1.0)/52.0
    G = 0
    B = 0

    # check number of parts (could use MultiPolygon class of shapely?)
    nparts = len(shape.parts) # total parts
    if nparts == 1:
       polygon = Polygon(shape.points)
       patch = PolygonPatch(polygon, facecolor=[R,G,B], alpha=1.0, zorder=2)
       ax.add_patch(patch)

    else: # loop over parts of each shape, plot separately
      for ip in range(nparts): # loop over parts, plot separately
          i0=shape.parts[ip]
          if ip < nparts-1:
             i1 = shape.parts[ip+1]-1
          else:
             i1 = len(shape.points)

          polygon = Polygon(shape.points[i0:i1+1])
          patch = PolygonPatch(polygon, facecolor=[R,G,B], alpha=1.0, zorder=2)
          ax.add_patch(patch)

    icolor = icolor + 1

plt.xlim(-130,-60)
plt.ylim(23,50)
plt.show()

In order to distinguish each polygon, I set each shape’s color based on how many shapes have already been plotted:

R = (float(icolor)-1.0)/52.0

This grades the red scale in an RGB tuple between 0 and 1 (since there are 52 shapes), and it is then used in the facecolor argument of PolygonPatch. The coloring is simply a function of the order in which the shapes are accessed:

shapefile_us

The goal, however, is to color each polygon by some sort of data so that we can actually learn something interesting, and that is exactly what read_shp_and_rcrd.py does.

read_shp_and_rcrd.py

Up to now, we’ve only considered the shape geometry, but that is only one part of a shapefile. Also included in most shapefiles are the records, or the data, associated with each shape. When a shapefile is imported,

shp_file_base='cb_2015_us_state_20m'
dat_dir='../shapefiles/'+shp_file_base +'/'
sf = shapefile.Reader(dat_dir+shp_file_base)

The resulting shapefile object (sf in this case) contains records associated with each shape. I wasn’t sure what fields were included for the State Boundary shapefile from census.gov, so I opened up a Python shell in terminal, read in the shapefile then typed

>>> sf.fields

to get a list of available fields:

[('DeletionFlag', 'C', 1, 0), ['STATEFP', 'C', 2, 0], ['STATENS', 'C', 8, 0], ['AFFGEOID', 'C', 11, 0], ['GEOID', 'C', 2, 0], ['STUSPS', 'C', 2, 0], ['NAME', 'C', 100, 0], ['LSAD', 'C', 2, 0], ['ALAND', 'N', 14, 0], ['AWATER', 'N', 14, 0]]

Down towards the end, there’s an interesting entry

['ALAND', 'N', 14, 0]

Though I couldn't find any documentation on the included fields, I suspected ALAND stood for land area (especially since it was followed by AWATER). So in read_shp_and_rcrd.py, the first thing I do is extract the field names and find the index corresponding to the land area:

""" Find max/min of record of interest (for scaling the facecolor)"""

# get list of field names, pull out appropriate index
# fieldnames of interest: ALAND, AWATER are land and water area, respectively
fld = sf.fields[1:]
field_names = [field[0] for field in fld]
fld_name='ALAND'
fld_ndx=field_names.index(fld_name)

I found this post helpful for extracting the fieldnames of each record.

Next, I loop over the records using the iterRecords() iterator to find the minimum and maximum land area in order to scale the polygon colors:

# loop over records, track global min/max
maxrec=-9999
minrec=1e21
for rec in sf.iterRecords():
    if rec[4] != 'AK': # exclude alaska so the scale isn't skewed
       maxrec=np.max((maxrec,rec[fld_ndx]))
       minrec=np.min((minrec,rec[fld_ndx]))

maxrec=maxrec/1.0 # upper saturation limit

print fld_name,'min:',minrec,'max:',maxrec

I excluded Alaska (if rec[4] != ‘AK’:) so that the color scale wouldn’t be thrown off, and then I also scale the maximum (maxrec=maxrec/1.0) to adjust the color scale manually (more on this later).

Now that I know the max/min, I loop over each shape and (1) calculate the RGB value for each polygon using a linear scale between the max and min and then (2) plot a polygon for each shape (and all the parts of a shape) using that RGB value:

for shapeRec in sf.iterShapeRecords():
    # pull out shape geometry and records 
    shape=shapeRec.shape
    rec = shapeRec.record

    # select polygon facecolor RGB vals based on record value
    if rec[4] != 'AK':
         R = 1
         G = (rec[fld_ndx]-minrec)/(maxrec-minrec)
         G = G * (G<=1) + 1.0 * (G>1.0)
         B = 0
    else:
         R = 0
         B = 0
         G = 0

    # check number of parts (could use MultiPolygon class of shapely?)
    nparts = len(shape.parts) # total parts
    if nparts == 1:
       polygon = Polygon(shape.points)
       patch = PolygonPatch(polygon, facecolor=[R,G,B], edgecolor=[0,0,0], alpha=1.0, zorder=2)
       ax.add_patch(patch)
    else: # loop over parts of each shape, plot separately
       for ip in range(nparts): # loop over parts, plot separately
           i0=shape.parts[ip]
           if ip < nparts-1:
              i1 = shape.parts[ip+1]-1
           else:
              i1 = len(shape.points)

           # build the polygon and add it to plot
           polygon = Polygon(shape.points[i0:i1+1])
           patch = PolygonPatch(polygon, facecolor=[R,G,B], alpha=1.0, zorder=2)
           ax.add_patch(patch)

plt.xlim(-130,-60)
plt.ylim(23,50)
plt.show()

One important thing not to miss is that on the first line, I loop over the iterShapeRecords iterable rather than using iterShapes. This is necessary so that I have access to both the shape geometry and the associated records, rather than just the shapes (iterShapes) or just the records (iterRecords).

Running the above code will produce the following map:

shapefile_us_colored_by_area

Because Texas is so much larger than the rest of the states, we don't see much of a difference among the other states. But we can adjust this by decreasing the max value used in the scaling. So after finding the max/min value, I set

maxrec=maxrec/2.0 # upper saturation limit

and end up with the following map that brings out more of the variation in the states’ land area (same map as in the very beginning of this post):

shapefile_us_colored_by_area_sat

Note that because I decreased the max value used for scaling, I had to ensure that the RGB value did not exceed 1, which is why I included the following lines limiting the green value (G):

    if rec[4] != 'AK':
         R = 1
         G = (rec[fld_ndx]-minrec)/(maxrec-minrec)
         G = G * (G<=1) + 1.0 * (G>1.0)

So that’s about it! That’s how you can read in a shapefile and plot polygons of each shape colored by some data (record) associated with each shape. There are plenty of more sophisticated ways to do this exercise, and I’ll be looking into some other shapefile Python libraries for upcoming posts.
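
As a teaser for those fancier approaches (and purely as a sketch, since I haven't used it in this post), a library like geopandas can collapse the read-records-and-color-polygons workflow into a couple of lines:

import geopandas as gpd
import matplotlib.pyplot as plt

# read the shapefile into a GeoDataFrame and color each state by its ALAND record
states = gpd.read_file('../shapefiles/cb_2015_us_state_20m/cb_2015_us_state_20m.shp')
states.plot(column='ALAND', cmap='hot')
plt.show()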

Shapefiles in Python: shapely polygons

In my last post, I described how to take a shapefile and plot the outlines of the geometries in the shapefile. But the power of shapefiles is in the records (the data) associated with each shape. One common way of presenting shapefile data is to plot the shapefile geometry as polygons that are colored by some value of data. So as a prelude to doing just that, this post will cover how to plot polygons using the shapely and descartes libraries. As always, my code is up on my github page.

The two python libraries that I’ll be using are shapely (for constructing a polygon) and descartes (for adding a polygon to a plot). So step 0 is to go install those! I’ll also be using the numpy and matplotlib libraries, but you probably already have those.

Though the documentation for shapely has some nice sample source code, I wrote my own script, simple_polygons.py, to get to know the libraries better. In this approach, there are two steps to building a polygon from scratch: constructing the points that define the polygon's shape and then mapping those points into a polygon structure. The first step doesn't require any special functions, just standard numpy. The second step uses the shapely.geometry.Polygon class to build a polygon from a list of coordinates.

There are limitations for valid polygons, but virtually any shape can be constructed, like the following pacman:

pacman

The first step is to build the list of coordinates defining the exterior points (the outer circle) and a list of interior points to exclude from the polygon (the eyeball). Starting with the exterior points, I calculate the x and y coordinates of a unit circle from 0.25pi to 1.75pi (0 to 2pi would map a whole circle rather than a pacman):

theta = np.linspace(0.25*3.14,1.75*3.14,80) 

# add random perturbation 
max_rough=0.05 
pert=max_rough * np.random.rand(len(theta)) 

x = np.cos(theta)+pert 
y = np.sin(theta)+pert

I also add a small random perturbation to each x-y position to give the outer pacman edge a bit of roughness, more similar to the shapefiles I'd be plotting later. Next, I build a python list of all those x-y points. This list, ext, is the list of exterior points that I'll give to shapely:

# build the list of points 
ext = list() 

# loop over x,y, add each point to list 
for itheta in range(len(theta)): 
    ext.append((x[itheta],y[itheta])) 

ext.append((0,0)) # add 0 point

At the end, I add the 0,0 point, otherwise the start and end points on the circle would connect to each other and I’d get a pacman that was punched in the face:

pacman_punch

That takes care of the exterior points, and making the list of interior points is similar. This list, inter, will be a list of points that define interior geometries to exclude from the polygon:

# build eyeball interior points 
theta=np.linspace(0,2*3.14,30) 
x = 0.1*np.cos(theta)+0.2 
y = 0.1*np.sin(theta)+0.7 

inter = list() 
for itheta in range(len(theta)): 
    inter.append((x[itheta],y[itheta])) 
inter.append((x[0],y[0]))

Now that we have the lists of exterior and interior points, we just give them to shapely's Polygon constructor (shapely.geometry.Polygon):

polygon = Polygon(ext,[inter[::-1]])

Two things about passing Polygon the interior list: (1) you can actually pass Polygon a list of lists to define multiple areas to exclude from the polygon, so you have to add the brackets around inter and (2) I haven’t quite figured out the [::-1] that the shapely documentation includes. I know that generally, [::-1] will take all the elements of a list and reverse them, but why does Polygon need the points in reverse? No idea. Without it, I only get an outer edge defining the eyeball:

pacman_badeye

I would love to get some information on why Polygon needs the reversed list, so leave me a note in the comments if you know why.

Regardless, the next step is to add that polygon structure to a plot, with a straightforward use of matplotlib.pyplot (imported as plt) and descartes.patch.PolygonPatch:

 

# initialize figure and axes 
fig = plt.figure() 
ax = fig.add_axes((0.1,0.1,0.8,0.8)) 

# put the patch on the plot 
patch = PolygonPatch(polygon, facecolor=[0,0,0.5], edgecolor=[1,1,1], alpha=1.0) 
ax.add_patch(patch) 

# new axes 
plt.xlim([-1.5, 1.5]) 
plt.ylim([-1.5,1.5]) 
ax.set_aspect(1) 

plt.show()

PolygonPatch's arguments are pretty self explanatory: facecolor and edgecolor set the colors for the fill and edge of the polygon. Conveniently, facecolor and edgecolor can be specified as RGB values, which I'll take advantage of for plotting shapefile records in my next post. PolygonPatch can also accept any of the kwargs available to the matplotlib.patches.Polygon class (like the transparency, alpha, between 0 and 1).

So that’s it! Pretty easy! And in some ways it is even easier to plot polygons from a shapefile, since pyshp imports shapefile coordinates as a list and you can just give that list directly to Polygon… more on that in the next post.

Shapefiles in Python: a super basic tutorial

I recently started a couple of projects that will involve using shapefiles and I got frustrated real fast. Many tutorials that I found assumed some previous knowledge of either shapefiles or the python libraries used to manipulate them. But what I wanted was a tutorial that helped me to plot a simple shapefile while getting to know what a shapefile actually is!

So here’s a SUPER simple example of how to load, inspect and plot a shapefile to make a map of the U.S! There are quite a few Python libraries dealing with shapefiles and it was hard to find the easiest place to start. I found the pyshp Python library the most approachable, so that’s what I use in the following example. There are many ways to visualize shapefiles in a more automated way than I do here, but I think that my approach here gives a clearer picture to a beginner of what a shapefile is and how to use Python with shapefiles.

The shapefile

Go get yourself a shapefile! The one I used (which will definitely work with my code below) is the lowest resolution state-level cartographic boundary shapefile from census.gov (link to census.gov, direct link to lowest resolution 20m .zip file). Once you download the .zip file, unpack it and take a look inside. A shapefile is actually a collection of different files, including a .shp file containing information on shape geometry (state boundaries in this case), a .dbf file containing attributes of each shape (like the name of each state) and others (check out the wiki page on shapefiles for a description of the other file extensions).

The code!

You can download my Python code: https://github.com/chrishavlin/learning_shapefiles

At present, the src folder includes only one python script: basic_read_plot.py. To run this script you will need to:

  1. install the pyshp Python library  (and numpy and matplotlib if you don’t have them already)
  2. edit the variables in the source code describing the path to the shapefile (dat_dir and shp_file_base in src/basic_read_plot.py)

After those two steps, just open up a terminal and run the script (assuming you’re in the src directory):

$ python basic_read_plot.py

The three plots described below should pop up.

So what does the code do? 

After the initial comment block and library import, the code reads in the shapefile using the string variables that give the location of the shapefile directory (dat_dir) and the name of the shapefile without extension (shp_file_base):

sf = shapefile.Reader(dat_dir+shp_file_base)

This creates a shapefile object, sf, and the next few lines do some basic inspections of that object. To check how many shapes have been imported:

print 'number of shapes imported:',len(sf.shapes())

For the census.gov state boundary shapefile, this returns 52 for the 50 states, Washington D.C. and Puerto Rico.

For each shape (or state), there are a number of attributes defined: bbox, parts, points and shapeType. The pyshp documentation describes each, and I’ll touch on each one in the following (except for shapeType).

The first thing I wanted to do after importing the shapefile was just plot a single state. So I first pull out the information for a single shape (in this case, the 5th shape):

shape_ex = sf.shape(5)

The points attribute contains a list of latitude-longitude values that define the shape (state) boundary. So I loop over those points to create an array of longitude and latitude values that I can plot. A single point can be accessed with shape_ex.points[0] and will return a lon/lat pair, e.g. (-70.13123,40.6210). So I pull out the first and second index and put them in pre-defined numpy arrays:

x_lon = np.zeros((len(shape_ex.points),1))
y_lat = np.zeros((len(shape_ex.points),1))
for ip in range(len(shape_ex.points)):
    x_lon[ip] = shape_ex.points[ip][0]
    y_lat[ip] = shape_ex.points[ip][1]

And then I plot it:

plt.plot(x_lon,y_lat,'k')

# use bbox (bounding box) to set plot limits
plt.xlim(shape_ex.bbox[0],shape_ex.bbox[2])

single

This returns the state of Oregon! I also used the bbox attribute to set the x limits of the plot. bbox contains four elements that define a bounding box using the lower left lon/lat and upper right lon/lat. Since I’m setting the axes aspect ratio equal here, I only define the x limit.

Great! So all we need now is to loop over each shape (state) and plot it! Right? Well this code snippet does just that:

plt.figure()
ax = plt.axes()
ax.set_aspect('equal')
for shape in list(sf.iterShapes()):
   x_lon = np.zeros((len(shape.points),1))
   y_lat = np.zeros((len(shape.points),1))
   for ip in range(len(shape.points)):
       x_lon[ip] = shape.points[ip][0]
       y_lat[ip] = shape.points[ip][1]

   plt.plot(x_lon,y_lat)

plt.xlim(-130,-60)
plt.ylim(23,50)

And we can see some problems with the result:

bad_map

The issue is that in some of the shapes (states), the geometry has multiple closed loops (because of the islands in some states), so simply connecting the lat/lon points creates some weird lines.

But it turns out that the parts attribute of each shape includes information to save us! For a single shape, the parts attribute (accessed with shape.parts) contains a list of indices corresponding to the start of a new closed loop within a shape. So I modified the above code to first check if there are any closed loops (number of parts > 1) and then loop over each part, pulling out the correct index range for each segment of geometry:

plt.figure()
ax = plt.axes() # add the axes
ax.set_aspect('equal')

for shape in list(sf.iterShapes()):
    npoints=len(shape.points) # total points
    nparts = len(shape.parts) # total parts

    if nparts == 1:
       x_lon = np.zeros((len(shape.points),1))
       y_lat = np.zeros((len(shape.points),1))
       for ip in range(len(shape.points)):
           x_lon[ip] = shape.points[ip][0]
           y_lat[ip] = shape.points[ip][1]
       plt.plot(x_lon,y_lat)

    else: # loop over parts of each shape, plot separately
       for ip in range(nparts): # loop over parts, plot separately
           i0=shape.parts[ip]
           if ip < nparts-1:
              i1 = shape.parts[ip+1]-1
           else:
              i1 = npoints

           seg=shape.points[i0:i1+1]
           x_lon = np.zeros((len(seg),1))
           y_lat = np.zeros((len(seg),1))
           for ip in range(len(seg)):
               x_lon[ip] = seg[ip][0]
               y_lat[ip] = seg[ip][1]

           plt.plot(x_lon,y_lat)

plt.xlim(-130,-60)
plt.ylim(23,50)
plt.show()

And we can see those spurious lines are now gone:

good_map

Final Thoughts

Now that I feel pretty good about the information contained in a shapefile and how it's stored, I'll be moving on to more exciting visualizations. It's important to note that there are many Python libraries that can plot shapefiles without manually pulling out the points as I've done here. But I feel much better about using those fancier approaches now that I've gone through this exercise.

Also, in this post I’ve only touched on the geometry information in a shapefile. But it’s really the records included in the .dbf files that will make this an interesting visualization. The records contain measurements, observations or descriptions for each shape and that information can be used to color or fill each shape to create visualizations like this one (not my work).

Useful links: pyshp documentation, Plot shapefile with matplotlib (Stack Exchange)

Taxi Tracers: Speed and Hysteresis of NYC Cabs

In my previous post, I outlined how I took the NYC yellow cab trip record data and made some pretty maps of Manhattan. Given how many variables are tracked, there are many possible avenues of further exploration.  It turns out that the average speed of taxis holds some interesting trends and in this post I’ll describe some of these trends (code available at my GitHub page).

I started off with plotting average speed against time of day, hoping to see some trends related to rush hour. I took three months of yellow cab data (January, February and March 2016) and calculated the average speed of every ride using the pick up time, drop off time and distance traveled. I then calculated a 2D histogram of average speeds vs time of day:

speedhist2d

I also calculated the median speed (solid curve) and the first/third quartiles (dashed curves) at each time of day. As expected, rush hour is evident in the rapid drop in speed from 6-10. I didn't expect such a large peak in the early hours (around 5), and I also found it interesting that after the work day, speeds only gradually increase back to the maximum.
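
For reference, a bare-bones sketch of the kind of calculation behind this figure (my own reconstruction, not the taxi_plotmod.py source; pickup and dropoff are assumed to be lists of pickup/dropoff datetimes and dist a numpy array of trip distances in miles):

import numpy as np

# average speed (mph) of each ride from trip time and distance
trip_hours = np.array([(d - p).total_seconds() / 3600.0
                       for p, d in zip(pickup, dropoff)])
speed = dist / trip_hours

# time of day of each pickup, as a fractional hour
hour_of_day = np.array([p.hour + p.minute / 60.0 for p in pickup])

# 2D histogram of average speed vs time of day
counts, t_edges, v_edges = np.histogram2d(hour_of_day, speed,
                                          bins=[24, 50],
                                          range=[[0, 24], [0, 40]])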

Next, I took the 2-D histogram, defined three representative time ranges and calculated 1-D histograms of the speed for those time ranges (left plot, below):

1dhistograms

These distributions are highly skewed, to varying degrees. The 5-6 time range (blue curve) sees a much wider distribution of speeds compared to the work day (9-18, green curve) and evening (20-24, red curve). Because of the skewness, I used the median rather than the mean in the 2-D histogram plot. Similarly, the standard deviation is not a good representation for skewed data, and so I plotted the first and third quartiles (the dashed lines in the 2-D plot).

The difference between the 1st and 3rd quartile (the interquartile range, or midspread) is a good measure of how the distribution of average cab speeds changes (plotted above, right; if the data were normally distributed, this would essentially be a plot of the standard deviation). Not only are taxis going faster on average around 5-6 am, but there's a much wider variation in taxi speeds. Traffic speeds during the work day are much more uniform.
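
If you just want the interquartile range for, say, one hour's worth of speeds, numpy can give it directly (a quick sketch; speeds_this_hour is a hypothetical array, and np.percentile differs slightly from the median-of-each-half definition used later in this post):

import numpy as np
iqr = np.percentile(speeds_this_hour, 75) - np.percentile(speeds_this_hour, 25)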

After playing with these plots for a bit, I got to thinking about how the speed of a taxi is only indirectly related to time of day! A taxi’s speed should depend on how much traffic is on the road (as well as the speed limit I suppose…), the time of day only reflects trends of when humans decide to get on the road. So if I instead plot the average speed vs the number of hired cabs, I get quite an interesting result:

speed_hyst_3_mo_a

Generally, the average speed decreases with increased number of cabs (increases with decreased cab count), but the changes are by no means linear. It’s easiest to read this plot by picking a starting time: let’s start at 6am, up at the top. At 6am, the average speed with about 1000 hired cabs on the road is about 23 mph. From 6am to 9am, the number of cabs increases, causing traffic to slow to about 12 mph. From 9am to 7pm, things get complicated (see the zoomed inset below), with the number of cabs and average speed bouncing around through the workday until 7pm, when the cab count drops and average speed rises in response. Eventually, the loop closes on itself, forming a sort of representative daily traffic cycle for Manhattan.

speed_hyst_3_mo_zoom

When I first looked at this, it immediately reminded me of hysteresis loops from all my thermodynamics classes. And so I started googling "Traffic Hysteresis", and it turns out there's a whole field of study devoted to understanding hysteresis loops in traffic patterns in order to better design systems of roadways that can accommodate fluctuations in traffic volume (e.g., Zhang 1999, Geroliminis and Sun 2011, Saberi and Mahmassani 2012). One of the important differences between those studies and the brief exploration I've done here is that the above studies have counts of total traffic. The calculations here only consider the taxis floating in the larger stream of traffic. While these taxi tracers are still a good measure of the average flow of traffic, the number of hired cabs at a given time is only a proxy for total traffic. A lot of the complicated behavior during the workday (9am-7pm) likely arises from changes in traffic volume not accounted for, like delivery trucks, commuters or buses. So in some ways it's remarkable that I got a hysteresis loop at all!

The Code

I described most of the methodology for reading and manipulating the NYC yellow cab data in my previous post. The two main additions to the code are a new class in taxi_plotmod.py (binned_variable) and an algorithm for finding the number of hired taxis (find_N_unique_vs_t of taxi_plotmod.py). The binned_variable class returns a number of useful values including median, first quartile, third quartile and binned values. The method find_N_unique_vs_t loops over a given time range and counts the number of taxis that are hired during that time.
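
Conceptually, counting hired cabs at a time t just means counting the rides whose pickup happened before t and whose dropoff happened after it. A minimal sketch of that idea (the actual find_N_unique_vs_t implementation is organized differently; pickup and dropoff are assumed to be numpy datetime64 arrays):

import numpy as np

def count_hired(pickup, dropoff, t):
    """ number of cabs that are mid-trip at time t """
    return np.sum((pickup <= t) & (dropoff > t))

t = np.datetime64('2016-01-15T17:30')
print 'hired cabs at', t, ':', count_hired(pickup, dropoff, t)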

A few useful tips learned during the process:

PyTip! Calculating the first and third quartiles. This was quite simple using numpy's where method. The first quartile is defined as the median of all values lying below the median, so for an array of values (var2bin), the first quartile at a given time of day (designated by the i_bin index) can be calculated with:

firstquart[i_bin]=np.median(var2bin[np.where(var2bin<self.med[i_bin])]) 

The where command finds all the indices with values less than the median at a given time of day, so var2bin[np.where(var2bin<self.med[i_bin])] selects all values that are less than the median (self.med[i_bin]), from which a new median is calculated.

PyTip! Converting to datetime from numpy’s datetime64. So at one point, I found myself needing to know the number of days between two dates. I had been importing the date of each taxi ride as a numpy datetime64 type to make selecting given date ranges with conditional statements easier. Both datetime and numpy’s datetime64 allow you to simply subtract two dates. The following two commands will give the number of days between two dates (assuming numpy has been imported as np and datetime as dt):

a=dt.datetime(2016,4,3)-dt.datetime(2015,3,1)
b=np.datetime64('2016-04-03')-np.datetime64('2015-03-01')

Both methods will give 399 days as the result, but I needed an integer value. The datetime64 method returned a weird I-don’t-know-what type of class that I couldn’t figure out how to convert to an integer value easily.  The easiest solution I found was to use the .astype method available with the datetime64 class:

date_end=np.datetime64('2016-04-03')
date_start=np.datetime64('2015-03-01')
Ndates=date_end.astype(dt.datetime)-date_start.astype(dt.datetime)

So what's happening here? Well, date_end and date_start are both numpy datetime64 variables. date_end.astype(dt.datetime) takes the end date and casts it as a normal datetime value. So I do that for both the start and end dates, take the difference and I'm left with Ndates as a datetime timedelta, from which you can conveniently get an integer number of days using

Ndates.days

So that’s that!

Stay tuned for either more taxi analysis or some fun with shapefiles!

Taxi Tracers: Mapping NYC

The city of New York has been recording and releasing trip data for every single taxi ride in the city since 2009 (TLC Trip Data, NYC), and these publicly available datasets are a treasure trove of information. Every single time a NYC taxi picks up a passenger, 19 different pieces of information are recorded, including the gps coordinates of pickup and dropoff, total fare and distance traveled. Think about how many cabs you see while walking the streets of New York and then imagine how many times in a day just one of those cabs picks up a passenger… yeah, that's a lot of data. A single data file from TLC Trip Data, NYC contains data for one month between 2009-2016 and weighs in at a MASSIVE 2 GB.

There have already been plenty of cool visualizations and analyses of this data set. Todd W. Schneider processed the ENTIRE data set from 2009-2015, amounting to over a billion taxi trips (link), and found lots of interesting trends. His comparison of yellow and green taxis to Uber rides in individual boroughs (link) is particularly interesting. Chris Wong created a visualization that follows a randomly selected cab through a day of driving (link) and Juan Francisco mapped over 10,000 cabs in a 24 hour period to create a mesmerizing video (link). Both used the start and end points from the TLC Trip data and created a best guess route via the Google Maps API. It's pretty fun to watch the taxis jump around the city, especially if you happen to have just read Rawi Hage's novel Carnival, in which the main character, a cab driver, describes one type of cabbie as "the flies", opportunistic wanderers floating the streets waiting for passengers and getting stuck in their own web of routine.

In my own exploration of the data, I’ve been following two main threads. The first, which I’ll describe today, is map-making using gps coordinates of the taxis. The second, which I’ll leave to a future post, involves a more statistical analysis of the data focused on understanding average taxi speed. I’ll start by describing the final product, then dig into the details of my coding approach (python source code is available on my github page here).

Taxi Maps (the final products)!

Just look at the map, it’s beautiful:

taxi_count_map_maxval_labeled

This map was made by creating a rectangular grid spanning New York City and counting how many pickups occurred within each grid bin (more details on this below) within three months. The color scale is the base-10 log of the number of pickups in each bin (yellow/white are a lot of pickups, red are a lower number and black are no pickups). The horizontal direction (longitudinal) is divided into 600 bins, while the vertical (latitudinal) is divided into 800 bins, a grid fine enough to resolve individual streets.

The level of detail visible in this map is impressive. My eye was initially drawn to the odd half-circle patterns in the East Village, which it turns out is Stuyvesant Town. It's also odd that the bridges to Brooklyn and Queens show up at all. I'm pretty sure it's impossible for cabs to pick up anyone on one of these bridges. Perhaps the presence of the bridges arises from uncertainty in the GPS coordinates or cabbies starting their trip meter after getting on the bridge? Hard to say. But it's kinda neat! The other very clear observation is the drop in cab pickups in Manhattan around the north end of Central Park. The NYC Taxi and Limo Commission observed this same feature and calculated that 95% of pickups in Manhattan occurred south of this boundary, which led to the creation of the green Boro taxis in order to provide more equitable access to taxis (more info here and here).

In addition to simply mapping taxi pickup location, some interesting trends are visible by changing the color scale to reflect different variables. The following map takes the same pickup locations, but colors each grid point by the average distance traveled away from the pickup location:

trip_distance_vs_lat

The map is fairly uniform in color, except for around Wall Street and the southern tip of Manhattan, where the more yellow coloration shows a generally higher trip distance. It’s hard to pick off numbers from the color scale, so I took the map and created a modified zonal average; at each latitude, I averaged across the longitudinal grid points, only using grid points with non-zero values, and then plotted the result (above, right). The line plot shows pretty clearly that cabs picking up passengers around Wall Street tend to go about 2 miles farther than trips originating anywhere else. Why is this? Hard to say without a lot more analysis, but 4 to 4.5 miles happens to be about the distance up to Penn Station or Grand Central Station, so I suspect commuters who don’t mind spending extra money to avoid the subway on their way to the train.
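
The modified zonal average is easy to do with a masked array (a sketch of the idea, not the repo code; dist_grid is assumed to be the 2D gridded mean trip distance, with zeros where there were no pickups):

import numpy as np

# mask out empty bins, then average over longitude at each latitude
masked = np.ma.masked_where(dist_grid == 0, dist_grid)
zonal_mean = masked.mean(axis=1)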

In the following I’ll dive into the horrid detail of how I made the above maps, read on if you dare! Future posts will look into some more complicated analysis of the yellow cab data, beyond simple visualization, so stay tuned for that…

Data Description

The datafiles at TLC Trip Data, NYC are HUGE comma separated value (csv) plain text files, each weighing in at about 2 GB. A single file covers all entries for a single month and contains 19 entries per line. A single line in the csv file looks like:

1,2015-03-06 08:02:34,2015-03-06 08:14:45,1,3.80,-73.889595031738281,40.747310638427734,1,N,-73.861953735351563,40.768550872802734,2,14,0,0.5,0,0,0.3,14.8

The TLC yellow cab data dictionary describes what each entry is, but the ones I find most interesting are (counting entries from zero, as in the code): pickup time (entry 1), dropoff time (entry 2), passenger count (entry 3), trip distance (entry 4), pickup and dropoff longitude and latitude coordinates (entries 5, 6, 9 and 10, respectively), total fare (entry 12) and tip (entry 15). So in most basic terms, all I've done is read each line of the file, pull out the entry of interest along with its gps coordinates and then find a way to plot that aggregated data.

Source Code, Tricks and Tips

The source code (available here) is divided into two python modules: taxi_main.py and taxi_plotmod.py. The first deals with reading in the raw .csv files while the second contains routines for processing and plotting that data. The source includes a README and several scripts that show examples of using the modules, so I won't go into that here. Rather, I'll describe my general approach along with several problems I ran into (and their solutions).

Reading the data

Reading in a single csv data file is relatively straightforward. The standard open command (open(filename, 'r')) allows you to read in one line at a time. The functions of interest are in taxi_main.py: read_taxi_file loops over all of the csv files in a specified directory and for each file calls read_all_variables. I did add some fancy indexing in the form of a list of strings that identify which of the 19 variables you want to save (see the Vars_To_Import variable) so that you don't have to import all the data. Check out the main function in taxi_main.py to see an example of using the commands to read in files.
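
A stripped-down sketch of that read loop (not the read_all_variables source; the filename is a placeholder and the column positions assume the 2016 yellow cab layout, counted from zero):

f = open('yellow_tripdata_2016-01.csv', 'r')
header = f.readline()              # first line holds the column names

dist, lon, lat = [], [], []
for line in f:
    entries = line.strip().split(',')
    if len(entries) < 19:
        continue                   # skip blank or incomplete lines
    dist.append(float(entries[4])) # trip distance
    lon.append(float(entries[5]))  # pickup longitude
    lat.append(float(entries[6]))  # pickup latitude
f.close()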

One of the design decisions I made early on was to read in the data and then store it in memory before processing it, meaning there is a big array (Vars or VarBig in taxi_main.py) that stores all of the imported data before feeding it to the routines for spatially binning the data (see below). This seriously limits the scalability of the present code: for the 3 months of data used in the above maps, I was using up 30-45% of my laptop's memory while reading and processing the data. Including more months of data would use up the remaining RAM pretty quickly. The way around this problem would be to write some routines to do the spatial binning or other processing on the fly, so that the whole data set wouldn't need to be stored. The advantage of holding the data set in memory, however, is that I can easily process and re-process that data multiple times without the bottleneck of reading in the files. So I stuck with the present design rather than rewriting everything…

Spatially binning point data

To create spatial maps and images, I had to go from point data to a gridded data product. This is a common practice in geo-located data sets (data sets that have latitude, longitude and some associated measurement at those coordinates) and though there are many complicated ways to carry out this process, the concept is simple:

gridded_product_concept

Conceptual cartoon of how to make a gridded data product from distributed point data

Each data point has a latitude, longitude and associated measurement (the color of the dots in the case of the conceptual image above) and the data points are spatially scattered according to their coordinates. To make a gridded data product, I first specify a latitude/longitude grid (dashed lines, left). I then find which points lie within each bin, aggregate the measurements within that bin and store the new single value in the bin. In this conceptual case, I've averaged the color of the data points to define the new gridded data product. In the NYC taxi maps shown above, the color is determined first by simply the total number of measurements within a bin, and then by the average of all the trip distance measurements within a bin.

Since the reading of the raw data file and subsequent spatial binning can take a few minutes, the code can write out the gridded data products (see the main function in taxi_main.py). These files are only a few MB in size, depending on the resolution of the grid, so it’s much easier to repeatedly load the gridded data products rather than re-processing the dataset when fine tuning the plotting.

pyTip: where. One of the useful python tricks I learned in this is the numpy routine where, which identifies the indices of an array that meet a specified criterion. Say you have a giant array X with values between 0 and 100. If you want to find the mean of X, but only using values less than 50, you could use np.mean(X[np.where(X<50)]).

pyTip: looping efficiency. Less of a python tip and more of a computational tip, but there are several ways to write a routine that spatially bins point data. The first approach that I stupidly employed was to loop over each bin in the spatial grid, find the points that fall within that bin and then aggregate them. If N is the total number of data points, Nx is the number of bins in the x dimension and Ny is the number of bins in the y dimension, then in pseudo-code, this looks like

 for ix in 1:Nx
     for iy in 1:Ny
 
      Current_Grid_Values = All data points within Lat_Lot[ix,iy]
      GridValue[iy,ix]=mean(Current_Grid_Values)

Why is this approach so bad? Well, it involves searching through the entire data set of N values each time through the loop. N is huge (the smallest cases I ran have N > 50,000; for the maps above, N > 1,000,000), so for a fine grid like in the maps above (where Nx = 600, Ny = 800), this is incredibly slow.

A much better way to do it is to loop over N, find the corresponding bin and aggregate the measurements within that bin (again, this is pseudo-code; check out the real source code, map_proc within taxi_plotmod.py, to see the working python code):

 for iN in 1:N
      Measurement = all_data_points[iN]
      Current_Lat_Lon = Lat_Lot[iN]
      find ix,iy that contains the Current_Lat_Lon 
      GridValue[iy,ix]=GridValue[iy,ix] + Measurement
      GridCount[iy,ix]=GridCount[iy,ix] + 1

After the loop, we can find the mean value of the measurements within each bin by normalizing by the total number of measurements within each bin:

 GridValueMean=GridValue / GridCount

A simple switch, but it scales to large values of N, Nx and Ny much better. The above maps took only a few minutes to process using the second (good) approach, while when I initially tried using the first (stupid) approach, the code was still running when I killed it after 30 minutes.

Plotting the spatially binned data: The plotting makes use of the matplotlib module, specifically the pcolormesh function. I went through a lot of color maps and ended up settling on 'hot.' It gives the maps an almost electric feel. Not much else to say about the plotting, pretty standard stuff. Check out plt_map within taxi_plotmod.py if you want the detail.
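
For completeness, a minimal version of that plotting step (a sketch, not the plt_map source; it reuses the GridCount, lon_edges and lat_edges arrays from the binning sketch above):

import numpy as np
import matplotlib.pyplot as plt

counts_log = np.log10(np.maximum(GridCount, 1))  # base-10 log of pickup counts
plt.figure(figsize=(6, 8))
plt.pcolormesh(lon_edges, lat_edges, counts_log, cmap='hot')
plt.gca().set_aspect('equal')
plt.colorbar(label='log10(pickup count)')
plt.show()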

Stay tuned for more exploration of the NYC taxi data!