Shapely Polygons: Expansion!

Today I found myself needing to scale a large number of geometric shapes corresponding to volcanic fields in the Western United States. Shapely (used in previous posts) makes this easy with the shapely.affinity.scale() function (an affine transformation preserves points and straight lines, and scale() is a simplified affine transformation with no rotation or skew).
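Just to illustrate what scale() does before getting to the actual script, here's a toy sketch (not from the repo) that expands a unit square by a factor of 2 about its center:

# toy example: expand a unit square 2x about its center
from shapely import affinity
from shapely.geometry import Polygon

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
expanded = affinity.scale(square, xfact=2.0, yfact=2.0, origin='center')
print(expanded.bounds)  # (-0.5, -0.5, 1.5, 1.5)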

So I put together a test script using the scale() method (on my github here) that takes the scaling factor as input. Since I’ll be using this with geographic coordinates and will be testing the sensitivity of measurements to different sized shapes, I added the option to scale the shape by a sequence of distances (angular or linear). The code is fairly straightforward and I don’t have time for a detailed post, so for now, here’s a trippy plot showing a squiggly-circle shape scaled by different amounts:

[figure: affine_expand, a squiggly-circle shape scaled by different amounts]

To generate the above plot (assuming you’ve cloned my learning_shapefiles repo and are in the directory containing affine_expansion.py):

python affine_expansion.py -f ../data/expansion_test_data.csv -exp 0.25,.5,.75,1.25,1.5,1.75,2


Merging shapes and plotting the physiographic boundary of the Colorado Plateau

Today I found myself needing to plot the physiographic boundary of the Colorado Plateau in Python. It’s been a while since I’ve touched on shapefiles (or anything on the blog) so I figured I’d write a quick blurb on reading and plotting this particular shapefile.

Data: shapefile of data from Fenneman and Johnson 1946 [1], available at https://water.usgs.gov/GIS/dsdl/physio_shp.zip

Code to load & plot & write processed data: colorado_plateau.py

Python requirements: pyshp, shapely, matplotlib

What you’ll learn: reading shapefiles, merging polygon shapes in Python with shapely

The Data

The first challenge was finding the actual lat/lon coordinates defining the edge of the Colorado Plateau… it’s amazing how many papers in geology/geophysics plot the boundary but don’t actually reference where the heck they got their coordinates from. After much digging I FINALLY found a paper that actually cited its source: Hopper and Fischer 2018 [2] reference a 1946 publication by Fenneman and Johnson [1] titled “Physiographic divisions of the conterminous U.S.”, and after a quick search I found the digitized data from that publication online at water.usgs.gov.

Here’s the summary page containing metadata: https://water.usgs.gov/GIS/metadata/usgswrd/XML/physio.xml

and a direct link to the zipped shapefile: https://water.usgs.gov/GIS/dsdl/physio_shp.zip.

The dataset contains a large number of physiographic regions and the Colorado Plateau is subdivided into multiple regions, so the code below pulls out the regions within the Colorado Plateau and joins them into a single shape defining the full boundary. To run the code below, unpack physio_shp.zip wherever you downloaded it to and rename the folder to physio (to match expectations for the pyshp shapefile reader).

The Code

The full code is here.

The XML metadata for the shapefile defines a province code for each province; the Colorado Plateau sub-regions all have a value of 21. So the code (1) reads the shapefile, (2) finds the shapes with a province code of 21 and (3) combines them.

Step 1: imports, reading arguments, reading the shapefile.

shapefile is the module name for pyshp; otherwise this is pretty self-explanatory:

import shapefile, os, sys
import matplotlib.pyplot as plt
from shapely.geometry import Polygon
from shapely.ops import cascaded_union

# read the arguments
fname=sys.argv[1] # path to physio.shp
outfolder=None
if len(sys.argv)>2:
    outfolder=sys.argv[2] # folder to store output

# read the shapefile
sf = shapefile.Reader(fname)

Step 2: Find the Colorado Plateau shapes.

The shapes are described in the records list of the shapefile object:

sf.records()

records() returns a list of attributes for each shape, and a single record looks like:

[3.886, 9.904, 220, 15, 212, '21b', 'INTERMONTANE PLATEAUS', 'COLORADO PLATEAUS', 'UINTA BASIN', 21]

The final value is the province code — so we just need to save off the indices for which that value is 21. It turns out the 3rd value in the record list is actually a cross-reference to a shape ID, but for some reason the indexing is offset by 2 when reading this shapefile with Python. So the shape data for this shape would be accessed with:

sf.shapes()[218]

rather than 220. Not sure why it’s off by 2 (I’d expect it to be off by 1 due to Python indexing), but in any case, my code simply records the list index as Python sees it:

# find the record indices for the Colorado Plateau (province ID = 21)
recs_to_plot=[]
for i_rec, rec in enumerate(sf.records()):
    if rec[-1]==21:
        print(rec)
        print(i_rec)
        recs_to_plot.append(i_rec)

# plot the individual records
plt.subplot(1,2,1)
for rec in recs_to_plot:
    pts=sf.shapes()[rec].points
    lons=[pt[0] for pt in pts]
    lats=[pt[1] for pt in pts]
    plt.plot(lons,lats,'.k')

As seen above — the coordinates for the shape boundaries for a record are in

sf.shapes()[rec].points

which is a list of longitude and latitude points (which the code unpacks for plotting). This section of code will generate the following outline of the Colorado Plateau regions:

Step 3: merging shapes

This is the fun bit! What we want is just the outer boundary of the union of all the shapes. The python library shapely lets us do this very easily: create a list of shapely Polygon objects, then combine them with the cascaded_union function:

# create a single shape for Colorado Plateau from union of sub-shapes
polies=[]
for rec in recs_to_plot:
    polies.append(Polygon(sf.shapes()[rec].points))
CP_bound=cascaded_union(polies)

# plot the exterior shape
lon,lat = CP_bound.exterior.xy
plt.subplot(1,2,2)
plt.plot(lon,lat,'.k')

and the resulting plot of just the exterior boundary:
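A quick aside: newer shapely versions deprecate cascaded_union in favor of unary_union, which should work as a drop-in replacement here:

# equivalent on newer shapely versions
from shapely.ops import unary_union
CP_bound = unary_union(polies)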

Step 4: output the processed data 

The code also exports the lat/lon points defining that exterior boundary with:

# export the final shape as a CSV of boundary points
if outfolder is not None:
    with open(os.path.join(outfolder,'ColoradoPlateauBoundary.csv'),'w') as f:
        f.write("lon,lat\n")
        for i,j in zip(lon,lat):
            f.write(str(i)+","+str(j)+"\n")

I could have written some code to save the data in shapefile format, but for such a small amount of data I find it easier to save a CSV and just create a Polygon from the list of points as I need it. I’m actually planning to combine such a Polygon with GeoPandas to find sets of points falling within the plateau (GeoPandas lets you do database joins on geospatial data; it’s awesome!).
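As a rough sketch of that plan (the file path and test point here are just for illustration), rebuilding the Polygon from the CSV and testing a point looks something like:

# rebuild the boundary Polygon from the exported CSV and test a point
import pandas as pd
from shapely.geometry import Point, Polygon

df = pd.read_csv('ColoradoPlateauBoundary.csv')
CP = Polygon(list(zip(df['lon'], df['lat'])))
print(CP.contains(Point(-110.0, 37.0)))  # a point near Four Corners should be inside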

Running the Code

To run the code:

python colorado_plateau.py /path/to/physio/physio.shp /folder/for/output/

where the first argument is the path to the downloaded and unpacked shapefile and the second argument is the location to save the CSV file (this argument is optional — no data will be saved if not included).

References

[1] Fenneman, N. M., & Johnson, D. W. (1946). Physiographic divisions of the conterminous U.S. Reston, VA: US Geological Survey, Physiographic Committee Special Map. https://water.usgs.gov/GIS/metadata/usgswrd/XML/physio.xml

[2] Hopper, E., & Fischer, K. M. (2018). The changing face of the lithosphere-asthenosphere boundary: Imaging continental scale patterns in upper mantle structure across the contiguous U.S. with Sp converted waves. Geochemistry, Geophysics, Geosystems, 19, 2593–2614. https://doi.org/10.1029/2018GC007476

Bikes! Part 0

Some months ago I discovered that there are a number of bike shares out there that make their data publicly available, and I’ve been meaning to download some of it and poke around. Well today I finally had some time. And though I’m not sure what I’ll be doing with this data set yet, I wanted to share a figure I made in my initial exploration.

The following figure shows the number of rides per day and the median ride distance for the Portland, OR bike share (data from BikeTown: https://www.biketownpdx.com/system-data). I threw the code in a new github repo here so you can take a look if interested, but I’m not going to go into detail yet (the code just downloads their system data files, concatenates the monthly files and does some minimal processing and plotting). In any case, the figure:

The neat (and maybe unsurprising) feature is the strong seasonality to bike share usage, both in terms of just the number of rides per day (high in summer, low in winter) and the median distance of each ride (longer rides in summer). There is an interesting spike in total rides around May 2018 — maybe excitement for springtime? or additional bikes added to the program? A plot of bike usage percent (total rides / available bikes) might be more illustrative.

So that’s that for now. Hopefully it won’t be too long before I have time for some more in-depth analysis.



Great Circle Paths

Been quite a while since any updates, but here’s a short one!

As a part of a contract I’m working on, I found myself having to plot the major arc of great circle paths on a map. But if you google “how to plot great circle path in [insert python library here]” all the solutions are for plotting minor arcs. Turns out in the end, there’s a really simple trick to plotting the major arc (and I felt pretty dumb when I realized it after wasting a ton of time), but I figured I’d write it up here in case it saves anyone else a bit of time. The short answer: given two points that you want the major arc for, just add the antipodes for each point to the list of points in your path.

First off, in case you need a review of great circles, here’s a globe:

Given two points on the surface of a sphere, there is a single great circle that contains both points (unless the two points are antipodal, like the poles, in which case there are infinitely many great circles). The short way round is the minor arc (red curve), the long way round is the major arc (green curve). And I needed to plot both of them.

The reason I got into plotting this in the first place is that in seismology, surface waves are described by major and minor arcs. When an earthquake generates seismic waves and is measured at a seismometer somewhere else, the raypath between the epicenter and seismic station falls on a great circle path. And surface waves are referred to in terms of the minor and major arcs: the R1 wave travels the minor arc and the R2 travels the major arc. These waves will actually keep going around the earth’s surface before dissipating: R3 is the R1 after it goes around again, R4 is the R2 after it goes around again, and on and on. So I needed to be able to plot all these.

Ok, so back to plotting…

I was using the plotly library in Python for this plot, so I’ll stick with that for examples here, but there should be similar functions in whatever mapping library you’re using. The full script is on my github page here.

So the important bit is just defining the list of latitude and longitude points. Here, the minor arc points are put into a dictionary:

paths={}
paths['minor_arc']={'lon':[ start_lon, end_lon ],
                    'lat':[start_lat,end_lat], 'clr':'red','dash':None}

When we give this to plotly, we’ll tell it to connect the two points, which will give us the shortest path between the two, the minor arc.

To plot the major arc, we just need to add some points between the start and end so that it takes the long way around. But how to choose the points? Well, turns out that there are tons of confusing pages out there on the trig used for calculating great circle paths, and I almost started to code some of it up… until I realized I only needed a couple of points. And the antipodes (the point exactly opposite a given point on the surface) are both really easy to calculate and guaranteed to lie on the great circle path. Just add 180 to the longitude and flip the sign of the latitude:

ant1lon=start_lon+180
if ant1lon>360:
    ant1lon= ant1lon - 360
ant1lat=-start_lat

Same goes for the antipode of the second (end) point. The >360 check just keeps longitude between 0 and 360 degrees.
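For completeness, those lines for the end point would be:

# antipode of the end point, same pattern as above
ant2lon = end_lon + 180
if ant2lon > 360:
    ant2lon = ant2lon - 360
ant2lat = -end_lat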

And now, we can put the antipodes in a list for the major arc:

paths['major_arc']={'lon':[ start_lon,ant2lon,ant1lon, end_lon ],
                    'lat':[start_lat,ant2lat,ant1lat,end_lat], 
                    'clr':'green','dash':None}

In the script, these paths are then added as a data dictionary used in creating the plotly figure:

DataDict=list()
for path in ['minor_arc','major_arc']:
    DataDict.append(
        dict(
            type = 'scattergeo',
            lon = paths[path]['lon'],
            lat = paths[path]['lat'],
            name= path,
            mode = 'lines',
            line = dict(
                width = 2.5,
                color = paths[path]['clr'],
                dash=paths[path]['dash'],
            ),
            opacity = 1.0,
        )
    )

figdata={}
figdata['data']=DataDict
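To turn that dictionary into an actual figure, something like plotly’s offline mode works (a sketch with an assumed layout, not the script’s exact code):

# render the figure to an html file (layout settings assumed for illustration)
import plotly.offline as po

figdata['layout'] = dict(title='great circle paths',
                         geo=dict(projection=dict(type='orthographic')))
po.plot(figdata, filename='great_circle_paths.html')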

The full script has a bit more where it actually plots the data (to give the image above), but plotly has some really nice tutorials for that already so I won’t bother explaining all that.

So that’s that! Hope it saves someone else some time.


Quick tutorial on Geocoding with Python

I recently found myself needing to get the latitude/longitude of a list of cities (for this map here) and it turns out, it’s pretty easy now that I know how to do it. Here’s a quick tutorial!

Ok, so the process of taking a city and assigning it a latitude/longitude point is called geocoding. There are many services that offer this (e.g., the Google or Bing Maps APIs) but most of the ones I looked at seemed like overkill for a one-time task of assigning lat/lon to about 500 cities. But then I discovered OpenStreetMap’s Nominatim! You can modify the http address to return results in an xml file. For example, the following searches for Providence, RI:

https://nominatim.openstreetmap.org/search.php?q=Providence+RI+USA&format=xml


And returns:

<searchresults timestamp="Thu, 02 Feb 17 16:17:00 +0000" attribution="Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright" querystring="Providence RI USA" polygon="false" exclude_place_ids="158799064,159481664" more_url="https://nominatim.openstreetmap.org/search.php?format=xml&exclude_place_ids=158799064,159481664&accept-language=en-US,en;q=0.8&q=Providence+RI+USA">
<place place_id="158799064" osm_type="relation" osm_id="191210" place_rank="16" boundingbox="41.772414,41.861571,-71.4726669,-71.3736134" lat="41.8239891" lon="-71.4128342" display_name="Providence, Providence County, Rhode Island, United States of America" class="place" type="city" importance="0.80724054252736" icon="https://nominatim.openstreetmap.org/images/mapicons/poi_place_city.p.20.png"/>
<place place_id="159481664" osm_type="relation" osm_id="1840541" place_rank="12" boundingbox="41.7232498,42.0188529,-71.7992521,-71.3177699" lat="41.8677428" lon="-71.5814833" display_name="Providence County, Rhode Island, United States of America" class="boundary" type="administrative" importance="0.58173948152676" icon="https://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png"/>
</searchresults>

If you scroll to the right you’ll see:

lat="41.8239891" lon="-71.4128342"

It’s pretty easy to write a python script to request then parse the xml result for lat and lon.  Here’s what that might look like (BUT DON’T DO THIS):

import urllib2

city='Providence, RI'
city_search=city.replace(' ','').split(',') # removes whitespace, splits city/state

# build the http address:
# (results in: 'https://nominatim.openstreetmap.org/search.php?q=Providence+RI+USA&polygon=1&format=xml')
osm='https://nominatim.openstreetmap.org/search.php?q=' 
fmt='+USA&polygon=1&format=xml'
srch = osm + city_search[0] + '+' + city_search[1] + fmt

# now use urllib2 to open the url and store the result:
response = urllib2.urlopen(srch)
the_page = response.read().split()

# and now we can parse the resulting string array where the xml info is stored.
# note: it only stores the first Lon/Lat that it encounters

Lon = 0.0
Lat = 0.0
for iel in range(0,len(the_page)): # loop over the strings in the_page, look for Lat/Lon
    if 'lon=' in the_page[iel] and Lon == 0.0:
        Lon=float(the_page[iel].split("'")[1])
    if 'lat=' in the_page[iel] and Lat == 0.0:
        Lat=float(the_page[iel].split("'")[1])

So. Why not just loop over your list of cities and repeat this exercise? Well, if you check out Nominatim’s documentation page and take a look at the usage policy, it requires: “(1) No heavy uses (an absolute maximum of 1 request per second). (2) Provide a valid HTTP Referer or User-Agent identifying the application (stock User-Agents as set by http libraries will not do). (3) Clearly display attribution as suitable for your medium. (4) Data is provided under the ODbL license which requires to share alike (although small extractions are likely to be covered by fair usage / fair dealing).” While I don’t think that my case of simply geocoding 500 or so cities falls under heavy usage and I could just delay my successive calls, I decided to look into their suggestions for other options.
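For what it’s worth, satisfying points (1) and (2) directly is straightforward; here’s a minimal sketch reusing the srch string from above, with a made-up User-Agent string:

# identify the script via User-Agent and throttle to 1 request per second
import time
import urllib2

req = urllib2.Request(srch, headers={'User-Agent': 'my-geocoding-script/0.1'})
the_page = urllib2.urlopen(req).read().split()
time.sleep(1.0)  # absolute maximum of 1 request per second per the policy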

In the end I settled on MapQuest’s implementation of Nominatim. It provides access to all the OpenStreetMap data (still open source and subject to the OSM license agreements) and a free MapQuest developer account gets you 15,000 requests/month. Waaay more than I’d need for this project.

So to geocode a list of cities, first sign up for a MapQuest Developer Account. You’ll get an API key assigned to you. Unlike some other APIs, MapQuest doesn’t use any fancy authentication. You basically just make a request for a URL with the API key in the http address. Reaaaaally easy (but not exactly secure).

Then you can run a code very similar to that above. My implementation is here: look_up_latlons.py. But it’s kind of tied to the data that I was mapping.

Some notes on the code.

(1)  the API key is passed in through a command line argument, so when you run this code you have to type

$ python look_up_latlons.py AL1243KSFD242332552134KLJ

where that long string of letters/numbers is whatever your API key is.

(2) And then the formatting of the http address is slightly different from the standard Nominatim API. The same search for Providence, RI looks like:

 http://open.mapquestapi.com/nominatim/v1/search.php?key=API_KEY&format=xml&q=Providence+RI

where API_KEY is, again, your API key.

(3) In my implementation, I have imported a CSV file as a pandas dataframe (called Counts). Each row contains a city name along with the number of people who marched in the Women’s Marches on Jan. 21. The meat of the code is copied below, in which I iterate over the rows in the dataframe (named Counts here), find the lat/lon for each row (i.e., each city) and then store that lat/lon in a new dataframe (NewCounts) because it’s bad to modify an existing dataframe while iterating over it. Here’s what that looks like:

# (Counts and API_KEY are defined earlier in the full script)
import urllib2, time
import numpy as np

osm='http://open.mapquestapi.com/nominatim/v1/search.php?key='+API_KEY+'&format=xml&q='

# loop over cities in crowd counts, find Lat/Lon
NewCounts=Counts.copy()
NewCounts['lon']=np.zeros(len(Counts)) # add new column for lon
NewCounts['lat']=np.zeros(len(Counts)) # add new column for lat
for index, row in Counts.iterrows():

    srch=osm+str(row['City']).replace(' ','+')

    print '\n\nLooking up lat/lon for',row['City'],index
    time.sleep(dt) # dt is a delay (in seconds) set earlier in the script
    response = urllib2.urlopen(srch)
    the_page = response.read().split()

    for iel in range(0,len(the_page)):
        if 'lon=' in the_page[iel] and NewCounts['lon'][index]==0.0:
            NewCounts['lon'][index]=float(the_page[iel].split("'")[1])
        if 'lat=' in the_page[iel] and NewCounts['lat'][index]==0.0:
            NewCounts['lat'][index]=float(the_page[iel].split("'")[1])

    print row['City'],NewCounts['lon'][index],NewCounts['lat'][index]

The MapQuest API didn’t have any specific usage constraints on how frequently you make requests, just the overall number in a month, but I added a small delay between calls with the time.sleep() function anyway.

That’s all for now, hopefully some more posts with colorful plots coming soon!


A Python tool for inspecting shapefiles

In my recent coding exploits, I’ve downloaded lots of different shapefiles. Most shapefiles were accompanied by nice .xml documentation with information about the data and how it’s stored or labeled, but a few had hardly any information at all. I knew the general content based on the description from the website where I downloaded the shapefile, but I didn’t know what they had used for the record labels and I didn’t know what the record values were exactly. So the past couple days I sat down and wrote a bit of code to help in unraveling a mysterious shapefile…

Check out (and/or download) the full Python source here: shapefile inspection!

The program is fairly straightforward. It traverses the records of a shapefile, recording the record label (or “field names” as I refer to them in the source) and information about each record. One of the program’s methods uses the Python XML API ElementTree to produce an xml file that you can load in a browser. Here’s a screenshot from using Firefox to view the xml file produced when running the program on the OpenStreetMap shapefile that I extracted via MapZen for my previous post.

[screenshot: xml_sample_1, the generated xml file viewed in Firefox]

In a browser, you can shrink or expand the xml attributes to get some basic information about each record: the name or label of the records, the data type and some sort of sample of the data. If the record data is an integer or float, then the sample will be the min/max values of the record while if it’s a string, it will either be a list of the unique strings in the records or just a sample of some of the strings. The OpenStreetMap shapefile contained some record values that were keywords, like the “highway” attribute in the screen shot above. While other records were strings with unique values for each shape, like the “name” attribute below:

[screenshot: xml_sample_2, the “name” attribute expanded]

In addition to generating an xml file, the program allows you to interactively explore a field.

When you run the program from the command line (type python inspect_shapefile.py in the src directory), it’ll ask for your input. It first asks if you want to give it a shapefile; here I said no and used the shapefile hardwired into __main__ of inspect_shapefile.py:

Do you want to enter the path to a shapefile? (Y/N) N
 
Using shapefile specified in __main__ :
directory: ../../learning_shapefiles/shapefiles/denver_maps/grouped_by_geometry_type/
filename: ex_QMDJXT8DzmqNh6eFiNkAuESyDNCX_osm_line

 Loading shapefile ...
... shapefile loaded! 

It then pulls out all the fields in the shapefile records, displays them and asks what you want to do. This is what it looks like using the OpenStreetMap shapefile:

Shapefile has the following field names
['osm_id', 'access', 'aerialway', 'aeroway', 'amenity', 'area', 'barrier', 'bicycle', 
'brand', 'bridge', 'boundary', 'building', 'covered', 'culvert', 'cutting', 'disused', 
'embankment', 'foot', 'harbour', 'highway', 'historic', 'horse', 'junction', 'landuse', 
'layer', 'leisure', 'lock', 'man_made', 'military', 'motorcar', 'name', 'natural', 
'oneway', 'operator', 'population', 'power', 'place', 'railway', 'ref', 'religion', 
'route', 'service', 'shop', 'sport', 'surface', 'toll', 'tourism', 'tower:type', 
'tracktype', 'tunnel', 'water', 'waterway', 'wetland', 'width', 'wood', 'z_order', 
'way_area', 'tags'] 

Do you want to investigate single field (single)? Generate xml 
file (xml)? Or both (both)? single

Enter field name to investigate: landuse

So you can see all these different fields. I chose to look at a single field (“landuse”) and the program will then look at the “landuse” record value for each shape, record its data type and save new record values:

searching for non-empty entry for landuse ...
data type found: str
Finding unique record values for landuse
1 of 212550 shapes ( 0.0 % )
 new record value: 
93 of 212550 shapes ( 0.04 % )
 new record value: reservoir
6782 of 212550 shapes ( 3.19 % )
 new record value: residential
110432 of 212550 shapes ( 51.95 % )
 new record value: grass
111094 of 212550 shapes ( 52.26 % )
 new record value: construction
Completed field name inspection 

---------------------------------------
Shapefile has the following field names
['osm_id', 'access', 'aerialway', 'aeroway', 'amenity', 'area', 
'barrier', 'bicycle', 'brand', 'bridge', 'boundary', 'building', 
'covered', 'culvert', 'cutting', 'disused', 'embankment', 'foot', 
'harbour', 'highway', 'historic', 'horse', 'junction', 'landuse', 
'layer', 'leisure', 'lock', 'man_made', 'military', 'motorcar', 
'name', 'natural', 'oneway', 'operator', 'population', 'power', 
'place', 'railway', 'ref', 'religion', 'route', 'service', 'shop', 
'sport', 'surface', 'toll', 'tourism', 'tower:type', 'tracktype', 
'tunnel', 'water', 'waterway', 'wetland', 'width', 'wood', 'z_order', 
'way_area', 'tags']

The field name landuse is str
and has 5 unique values
Display Values? (Y/N) Y
 possible values:
['', 'reservoir', 'residential', 'grass', 'construction']

As you can see from the output, there were 4 non-empty keywords (reservoir, residential, grass and construction) used to describe the ‘landuse’ field. So I could now write some code to go into a shapefile and extract only the shapes that have a ‘residential’ value for ‘landuse.’ But I couldn’t do that until I (1) knew that the landuse field existed and (2) knew the different values used for landuse type.
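That extraction might look something like this (a sketch; the path is a placeholder):

# keep only the shapes whose 'landuse' record is 'residential'
import shapefile

sf = shapefile.Reader('path/to/the_shapefile')
field_names = [f[0] for f in sf.fields[1:]]  # fields[0] is the DeletionFlag entry
i_landuse = field_names.index('landuse')
residential_shapes = [sf.shape(i) for i, rec in enumerate(sf.records())
                      if rec[i_landuse] == 'residential']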

So there it is! That’s the program. Hopefully all the shapefiles you ever download will be well-documented. But if you find one that’s not and you really need to figure it out, this little tool might help!

Some code notes and tips

The xml file that I create doesn’t follow any particular standard or convention, just what I thought might be useful. Perhaps that could be improved?

REMEMBER THAT IN PYTHON, YOU NEED TO EXPLICITLY COPY LISTS! I stupidly forgot that when you make a list

list_a = list()
list_a.append('blah')
list_a.append('d')

And then want to make a copy of the list, if you do this:

list_b = list_a

Then any changes to list_b will change list_a. But if you do

list_b = list_a[:]

You’ll get a new copy that won’t reference back to list_a. This is probably one of the things that I forget most frequently with Python lists. Palm-smack-to-forehead. 

The XML API ElementTree was pretty great to work with. You can very easily define a hierarchy that will produce a nice xml tree (see this example). I did, however, have some trouble parsing the direct output from the type() function. When you calculate a type,

type(0.01)

you get this:

<type 'float'>

When I gave it directly to ElementTree (imported as ET here), like this:

ET.SubElement(attr, "attrtype",name="data type").text = type(0.01)

I would get some errors because of the enclosed quotation marks. To get around this, I converted the type output to a string, split it up by the quotes and took the index that would just be the type (int, str, or float):

ET.SubElement(attr, "attrtype",name="data type").text = str(type(0.01)).split("'")[1]
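Putting those pieces together, here’s a minimal sketch (with made-up tag and field names) of the kind of tree the program builds:

# build a small xml tree and write it to a file
import xml.etree.ElementTree as ET

root = ET.Element("shapefile_inspection")
attr = ET.SubElement(root, "field", name="landuse")
ET.SubElement(attr, "attrtype", name="data type").text = str(type('a string')).split("'")[1]
ET.ElementTree(root).write("inspection_sample.xml")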

Mapping some things!

Since publishing my series of posts on manipulating shapefiles in Python (1, 2 and 3), I’ve been exploring different open data catalogs so I thought I’d share some of the maps I’ve mapped! All these were produced using scripts in my learning_shapefiles repository on GitHub (see denver_stack.py and denver_tree_canopy.py).

Downtown Denver

[map: denver_downtown]

MapZen is a pretty sweet service! You can draw a regional box, and the folks at MapZen will extract all the OpenStreetMap data within that box and give you the shapefiles with all that info. The first thing I (stupidly) did after downloading my data extraction from MapZen was to just plot all of the shapes… and I then proceeded to sit around for quite some time while my laptop chugged away and produced a highly detailed map of all the things in Denver County. I zoomed in to downtown Denver for the above image to show off the detail.

Metro-Denver

[map: denver_road_rivers]

A more abstract representation of the primary roadways in metro Denver. I figured out how the MapZen/OpenStreetMap shapefile was organized and plotted only motorways, primary and secondary roads (bright white in the map). I also created a grid containing the shortest distance to a roadway (using the distance method available for shapely.geometry.LineString) and contoured the inverse distance (1/x) to evoke the topographic contours along rivers.
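The distance-grid idea, sketched with a toy road geometry and a coarse grid (the real script loops over many more shapes):

# grid of shortest distance to any road, then invert for contouring
import numpy as np
from shapely.geometry import LineString, Point

roads = [LineString([(-105.1, 39.6), (-104.9, 39.8)])]  # stand-in for real roads
lons = np.linspace(-105.2, -104.8, 50)
lats = np.linspace(39.5, 39.9, 50)
dist = np.zeros((lats.size, lons.size))
for i, lat in enumerate(lats):
    for j, lon in enumerate(lons):
        dist[i, j] = min(road.distance(Point(lon, lat)) for road in roads)
inv_dist = 1.0 / (dist + 1e-4)  # small floor avoids dividing by zero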

Tree Cover in Denver County

[map: denver_1]

Denver’s Open Data Catalog has a bunch of databases with shapefiles galore. I downloaded the one for the Tree Canopy and then plotted the percent tree cover in each geometry. This is what actually led me to learn how to plot roadways… and here I overlaid the tree cover on the same MapZen extraction of OpenStreetMap data. Along with the roadways underneath, it forms a sort of abstract tree.
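The fill-by-record idea, roughly (the field position and file path here are assumptions, not the actual database layout):

# color each polygon by a percent-cover record value
import shapefile
import matplotlib.pyplot as plt
from matplotlib import cm

sf = shapefile.Reader('path/to/tree_canopy')
vals = [rec[0] for rec in sf.records()]  # assumes percent cover is the first field
vmax = max(vals)
for shp, val in zip(sf.shapes(), vals):
    lons = [pt[0] for pt in shp.points]
    lats = [pt[1] for pt in shp.points]
    plt.fill(lons, lats, color=cm.Greens(val / vmax))
plt.show()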

So that’s it for today. Three maps from a couple different open data sources using some inefficient Python. Not going to go into detail on the code this time, because, well, it’s slooow. I’m using modifications of the simple scripts that I used in my shapefile tutorials because I was excited to just plot things! But there are much better ways to handle shapefiles with hundreds of thousands of geometries. So until next time (after I figure out how to use fiona or geopandas), just enjoy the visuals!