Elijah Robison | GIS Blog

A scrapbook of GIS tricks, with emphasis on FOSS4G.

Terminal one-liner to create shapefile from WKT using ogr2ogr

without comments

Fair warning, this is a Linux-themed solution. But I expect it could be ported to Windows without too much trouble.

Today I needed to create a shapefile with a single point geometry in it to use as an input for a random GIS utility executable, which required its input to be an actual shapefile rather than an array of coordinate pairs, or even a single coordinate pair, or even separate x=/y= input parameters for a single coordinate pair. I already had the point coordinate I wanted to use, and I didn't want to go to the trouble of making this shapefile! So I got to wondering if I could use ogr2ogr to render a simple shapefile from the terminal.

Fortunately, with the help of some utilities built into my Ubuntu shell, I was able to come up with the following solution. This compound command 1) creates an empty file (dataset.csv), 2) pushes two features into it, and 3) uses ogr2ogr to translate the CSV into a shapefile tagged with the intended CRS, in this case EPSG:4326.

Here’s the full command. Note that pairs of ampersands (&&) are used to daisy-chain the individual steps together into a single instruction:

touch dataset.csv && printf "gid,WKT,some_value\n1,POINT(-82.048051 33.567181),Alpha\n2,POINT(-92.7774 35.9829),Bravo\n" > dataset.csv && ogr2ogr -f "ESRI Shapefile" dataset.shp -dialect sqlite -sql "SELECT gid, GeomFromText(WKT), some_value FROM dataset" dataset.csv -a_srs EPSG:4326

Here’s a breakdown of what happens..

touch dataset.csv
  • Create a file (in the current directory) named “dataset.csv”

printf "gid,WKT,some_value\n1,POINT(-82.048051 33.567181),Alpha\n2,POINT(-92.7774 35.9829),Bravo\n" > dataset.csv
  • Push a string through standard output (STDOUT) and INTO (>) the new file “dataset.csv”
  • The string imposes line breaks using the “\n” sequence and establishes a CSV type containing three lines
  • The three lines of content include: a column header, feature 1, and feature 2

ogr2ogr -f "ESRI Shapefile" dataset.shp -dialect sqlite -sql "SELECT gid, GeomFromText(WKT), some_value FROM dataset" dataset.csv -a_srs EPSG:4326
  • The ogr2ogr command exports a shapefile, “dataset.shp”
  • The input file, “dataset.csv” is included toward the end, along with the declaration of the source data’s CRS (EPSG:4326/WGS84)
  • ogr2ogr is instructed to use the SQLite dialect of SQL to pull data from the target dataset
  • The SQL command makes sure ogr2ogr knows to identify the WKT column as the source data’s geometry field
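If hand-typing WKT into a printf string gets unwieldy, the same CSV can be generated with a few lines of Python. This is just a sketch to complement the one-liner (wkt_csv is a made-up helper, not part of the original command); note that polygon WKT contains commas, so the csv module's automatic quoting matters once you go beyond points:

```python
import csv
import io

def wkt_csv(rows):
    """Build CSV text with gid, WKT, and one attribute column.

    The csv module quotes any WKT that contains commas (e.g. polygons),
    which the hand-typed printf version doesn't need for simple points.
    """
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(["gid", "WKT", "some_value"])
    writer.writerows(rows)
    return buf.getvalue()

features = [
    (1, "POINT(-82.048051 33.567181)", "Alpha"),
    (2, "POINT(-92.7774 35.9829)", "Bravo"),
]
csv_text = wkt_csv(features)
# Write csv_text to dataset.csv and the same ogr2ogr command applies unchanged.
```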

It runs quickly and does exactly what I wanted: crank out a shapefile when all I have is a point or a polygon or something, already in WKT format. I’m sure I’ll be using this again, so I wanted to document it here for easy reference later. Hopefully someone else finds it useful!

Written by elrobis

July 16th, 2020 at 5:04 pm

CentOS 7 and PostgreSQL: Move the PostgreSQL data_directory

with 4 comments

Long time no see. Scroll down for the answer, or start here for the story..

Today, by happenstance while testing a backup/restore strategy, my coworker discovered that our CentOS 7 installation from ISO/DVD used a default disk partitioning strategy that allocated a majority of disk space to the /home mountpoint, and a dearth of space to /, the root mountpoint. In our example, this meant about 2.1 TB available to the home mount and approximately 50 GB to the root mount.

Meanwhile, our default PostgreSQL 9.x install on this platform established the database instance at a location on the root mountpoint, which naturally limited it to 50 GB of consumption. While 50 GB might be enough for our purposes relative to the database instance, there is some danger in that: if the instance eventually consumed all 50 GB, it could potentially hose the system. Believe it or not, it’s possible for log files to burgeon to huge sizes like this, particularly on systems that get stood up for simple missions and run for a decade trouble-free, their log files all the while collecting information about every db query, every web request, every user login, etc.

This system is not yet in production, so one option available to us was to start entirely from scratch and customize the disk partitioning so more space would be available to the database, and who knows what else. We considered resizing the partitions, stealing some space from the rich home mount and giving it to the less affluent root mount, but that option looked like a mess we didn’t want to step in. So ultimately we decided to try moving the database instance to the preferred mountpoint. But, having reached this decision, and being generally sure of what needed to happen for it to work, we didn’t find a definitive start-to-finish how-to that addressed our confusions or a few of the gotchas we ran into. I’ll refrain from discussing what we expected and why, or a few missteps we made in the process, because in the end that’s just brain pollution.

While I suppose this approach works for most CentOS 7.x and PostgreSQL 9.x combos, since versions, repositories, and package managers are things, your mileage may vary.

No warranty implied! 

If you follow these instructions to the letter, you should be able to undo everything by working backwards, which is why we kept the original data directory intact, only renamed. But tread very carefully if you’re considering moving a production database.


CentOS 7 and PostgreSQL: How to move the PostgreSQL data_directory

1. To get started, you need your current data_directory value. If you don’t know it, you can get it from your database client (pgAdmin, DBeaver, etc.) by connecting as the postgres user and querying:

SHOW data_directory;

Alternatively, you can get it from psql; here’s how that looks in my case:

cd /home
su postgres
psql
show data_directory;
## /var/lib/pgsql/9.5/data


It probably goes without saying, but everywhere you see 9.5, your service names and file paths may be a little different, so adjust accordingly.


2. Next, create a new home for the PostgreSQL instance. You’ll want to stop the database to release any locks. We created a location at /home/data/pg_data, like this:

service postgresql-9.5 stop		## Stop the database.
cd /home				## Make your new containing directory structure..
mkdir data
cd data
mkdir pg_data
chown -R postgres:postgres pg_data	## Establish postgres as the owner of the new directory.


3. Now deep-copy the contents of your original data directory to the new location, then rename the original data directory so that when you restart the instance, you’ll know for certain it’s working out of the new location. Here’s how:

cd /var/lib/pgsql/9.5/data		## Browse into the original data dir
su postgres				## Switch to the postgres user for permission safety
cp -a * /home/data/pg_data/		## Make an archive copy (-a) of the contents, preserving permissions and symlinks
cd ..					## Browse back up..
mv data/ ORIGDATADIR			## And rename (mv) the original data folder


4. Now PostgreSQL is moved, but you have to correct how systemd launches the database service. We surmised this from the ServerFault question asked here, and this is how we did it.

Note that nomenclature like this [i], or this [ESC], [:], etc.. means literally press those keys to communicate with the VI interface. Hopefully that makes sense. Also, please see the WARNING discussed in the closing thoughts.

cd /usr/lib/systemd/system/			## Browse here..
vi postgresql-9.5.service			## Open the postgres service file in VI
[i]						## Toggle INSERT mode in VI by hitting "i"
#Environment=PGDATA=/var/lib/pgsql/9.5/data/	## Find and comment-out this line with a pound sign (#)
Environment=PGDATA=/home/data/pg_data/		## And replace it with a line using your new data_directory path
[ESC]:x!					## Finally save your edits and close the file.


5. In the final step, you need to impose a change on SELinux as described here (lines 2 and 3), ensure proper access permissions on the new data_directory location (lines 4 and 5), reload the systemd daemon so it re-reads the service file you modified, and then restart the PostgreSQL instance.

cd /home/data
semanage fcontext --add --equal /var/lib/pgsql /home/data/pg_data
restorecon -rv /home/data/pg_data
chmod -R 700 /home/data/pg_data
chown -R postgres /home/data/pg_data
systemctl daemon-reload
service postgresql-9.5 start


That should do it—the start command should bring your database back online, operating out of a new location. If it fails to start, you might learn something about the problem by running journalctl -xe, which we did a few times while troubleshooting the approach.

WARNING. As alluded to in Step 4, when you open the postgresql service file, notice the counsel at the top of the file instructing you not to modify the file, but to create a different file elsewhere, in /etc, and modify that one instead, because changes to the systemd/system file will be overwritten in the event of a package upgrade or something. We couldn’t get that to work for some reason, so we made good notes in our system documentation and hope to keep this concern in mind going forward. I encourage you to pursue the recommended approach, but at the very least be mindful of this potential reversal down the road and any poor soul who may be following in your administrative footsteps without your knowledge of the problem! At minimum, put a descriptive README file in a few key locations, like the root user’s home directory and the database’s default installation path.
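For reference, the officially sanctioned route is a systemd drop-in override rather than editing the unit file in place. A sketch of what that would look like follows; since we couldn’t get it working ourselves, treat the path and contents as the standard systemd pattern, not a verified recipe:

```ini
# /etc/systemd/system/postgresql-9.5.service.d/override.conf
# Create the directory and file by hand (or run: systemctl edit postgresql-9.5),
# then: systemctl daemon-reload && service postgresql-9.5 restart
[Service]
Environment=PGDATA=/home/data/pg_data/
```

Because the drop-in lives under /etc, it should survive package upgrades that rewrite the file under /usr/lib/systemd/system.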

Looking back, there are a few changes to this approach that would save a few steps, but this worked for us so I decided to post it as-built. (i.e. Rather than create pg_data, then cp -a the contents of data, just cp -a the entire data directory, which might avoid the chmod and chown steps later on..). I may decide to go back and modify the postgresql-9.5.service startup to use the development-recommended approach, in which case I’ll hopefully update this post, but please don’t hesitate to leave a comment if you know how to get that working.

Written by elrobis

May 21st, 2019 at 3:36 pm

Posted in CentOS 7,PostgreSQL

Socket-served Internet mapping, anyone?

with 6 comments

I may or may not be on to something.

For a while I’ve been growing more and more tired of the typical Internet mapping user experience. It’s basically this: open a map and zoom/pan somewhere, wait a moment while (hopefully) some “working” graphic gives you a clue that the map hasn’t forgotten about you; meanwhile, some basemap tiles load, then after a moment or two, “poof!” all the interactive data (i.e. the sole reason you are using that particular map) appears all at once.

Fortunately, in most cases, the basemap tiles “sprinkle in” one by one as they become available, which does give the user an impression that something is happening. But more often than not, basemap tiles dramatically out-perform the vector data component, and the overall feel of the map becomes this kind of “clicky kludgy .. ..CLUNK”, as the tiles pepper in, followed by the hammer that is all the vector data landing at once.

So I’ve always looked for ways to streamline the vector IO as much as possible. 1) Maybe it’s possible to simplify the data a little, removing redundant vertices? 2) Maybe it’s possible to return that data with less coordinate precision (i.e. the number of decimal places in the coordinate values), because a precision above 8 probably isn’t even legit. 3) Is the database using its geometry index, or is it possible to better tune the DB so that it handles the data access step more athletically? 4) What about pre-caching the entire data table into RAM so the DB never has to hit disk to serve a query on that table? That’s pretty much the request/access step, but what about 5) returning the data in the leanest possible format–for example, GML is much heavier than GeoJSON. Or 6) what about returning a home-rolled JSON response using Well Known Text (WKT) and just a few important fields, rather than providing the entire record for every feature?
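Ideas 2 and 6 are easy to sketch together in a few lines of Python. The function and field names here are made up for illustration, not from any real service: trim the coordinate precision inside the WKT, then ship only WKT plus the fields the map actually needs, with no whitespace padding:

```python
import json
import re

def trim_precision(wkt, places=6):
    """Round every decimal number in a WKT string to a fixed precision."""
    fmt = "%." + str(places) + "f"
    return re.sub(r"-?\d+\.\d+", lambda m: fmt % float(m.group()), wkt)

def lean_feature_payload(rows, fields):
    """Ship WKT plus only the requested attribute fields."""
    payload = []
    for row in rows:
        feature = {"wkt": trim_precision(row["wkt"])}
        for name in fields:
            feature[name] = row[name]
        payload.append(feature)
    # Compact separators: no spaces after ',' or ':' in the JSON text.
    return json.dumps(payload, separators=(",", ":"))

parcels = [{"wkt": "POINT(-82.04805100001 33.56718099999)",
            "pin": "12-34", "owner": "Doe", "deed_book": "B-77"}]
lean = lean_feature_payload(parcels, ["pin"])
```

Here the owner and deed_book columns never leave the server, and every coordinate is capped at six decimal places.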

I feel like I’ve tried everything I know to try, and I just can’t serve a deep table of polygons (specifically, parcel polygons) through all the motions as fast as I’d like. At the end of the day, the choke point is the HTTP protocol itself, and the very client <–> server paradigm that represents almost 100% of the static web. (By static web, I mean basic websites where you go to fetch assets, read/learn things, watch things, etc. The dynamic web would be things like video games, chat rooms, Skype, and stuff like that.)

That got me to thinking: maybe an answer lives somewhere in the dynamic web. A major service-level component of the dynamic web is the “socket”, and sockets have been around for a while. The original AOL chat rooms used them, as did instant messaging apps. Sockets allow servers to “push” data into listening clients. This is why you don’t have to click a refresh button to see if your friend ever responded; the response just appears because it’s pushed in.

For this reason, I wanted to explore the idea of a socket-served web map. I thought of it as a chat-like interaction. The map says “hey, I’m over here!” Then the server gets any relevant data and pushes it right into the map. Most of the get/access-level optimizations I described above apply equally here. However, a win in user experience *may* come from the fact that each individual feature—much like a basemap tile—can be drawn as it becomes available to the client. This removes a huge choke point imposed by the HTTP protocol: waiting for the browser to receive the entire payload of data, parse it, and render it, all before you see the first feature. Instead, features are drawn one by one.

My theory feels good, and the approach seems to work. At the very least it’s doing what I hoped, though I have yet to test it in a deployed scenario; right now it’s all on my development machine. But it looks pretty cool in action! Check it out below.

If anyone is interested, I’ll share the code. Otherwise, I’ll make it available after I get it a little further along and prove to myself it works well once deployed.

..to be continued

– elrobis

Written by elrobis

March 17th, 2015 at 4:56 pm

Posted in Uncategorized

Q: What’s The best way to convert CDs to MP3s in Windows? A: VLC and the Command Line. Here I’ll show ya..

with 13 comments

Today my friend (shout out to Nic Zamora) wanted to know if I still had a recording we made of a song he co-wrote way back in ancient history ..like 2007 or something. Gah..

Well it got me to thinking—my other buddy (shout out to Corey Nolan) had a similar request awhile back for a recording we did with his brother, Dustin, around the same time. The problem was, the only recording I had of the latter was on CD. Since then, that one simple deterrent—being on CD—was solely responsible for preventing me from ever “getting around to it”, like I said I would.

The reason I hate ripping CDs is because it’s such a ridiculous process ..first you have to open some asinine software (and God help you if it’s Windows Media Player), click around like a fool, be confused, drag files and/or locate folders, agonize over a variety of settings, many of which you, A) may not know anything about, 2) remember from the last time you did this resulting in inconsistently ripped music, and D) ..worst of all, potentially but unintentionally introduce some weak-sauce copy protection.

There had to be a better way.

Caveat—since writing this post, I’ve discovered that CDs containing extra material, like music videos, etc., unfortunately don’t play well with this program. I may return and solve that problem in the future, but with only one CD affected so far, it’s not a high priority.

As a GIS software developer, I use a LOT of open source tools, especially when I need to bend data out of one format and into another—usually from flat files like shapefile into some database instance, like PostgreSQL or MySQL. The best tool for this, of course, is the ogr2ogr command line utility in the GDAL/OGR suite. It’s the best for many reasons. But in this context, it’s the best because everything you want to do is done over the command line. No clicking, no dragging. No BS. Just type the “instruction” you want to execute into a command line window, run it, and you’re done. Simple.

So this morning I had enough and finally asked the obvious question—why can’t I convert music over the command line as easily as I can bend and twist GIS data formats??

It turns out, I can! Using the VLC Media Player, which provides a command line interface..

First, the Google took me to where someone already asked a similar question on Stack Overflow. And after toying around with a batch file provided by a helpful SO user, I tweaked it a little so that I could reuse the file more easily—specifically by passing in the path to any folder where I wanted new mp3 tracks to be saved into.

And it’s wicked simple. In the time I’ve spent so far writing this post, I’ve already used this solution to rip seven different CDs—Chopper One, ACDC live, Social Distortion, DGC Rarities, Robert Bradley’s Blackwater Surprise, Nerf Herder, and ..yep, the song my friend asked for. 🙂

Oh, and I’ll probably have 8 or 10 CDs ripped by the time I publish. I love open source software.. LOVE it.

Now that I’m editing the post a little while later—make that 14 albums exported ..going on 15. 8)

So without additional storytelling, here’s what you have to do to rip your CDs to MP3 like a GIS jedi..

1. Install VLC.

When VLC installs, make a note of the directory VLC installs to. It should be something like

C:\Program Files\VideoLAN\.. or maybe


You’ll need this later in Step 5.


2. Create some directory near your drive root that DOES NOT HAVE SPACES in it. Like C:\MP3s

*Note: From here on, let’s assume you won’t put any spaces in any file paths.*


3. Open Notepad. Start > Run > type “notepad” without quotes > ::hit Enter::


4. Copy and paste the code between these lines into your notepad window..




@echo off
setlocal EnableDelayedExpansion

set out_path=%1

:: IMPORTANT!!!! Set the path to the VLC installation on your system.
:: Make sure you see the file "vlc.exe"
set vlc_path=C:\Program Files\VideoLAN\VLC\vlc

:: Add a trailing backslash if necessary..
if defined out_path if not "%out_path:~-1%"=="\" set out_path=%out_path%\

:: Create the output directory, if necessary..
if not exist %out_path% (
echo Creating output directory..
mkdir %out_path%
) else (
echo Output directory already exists, reusing it..
)

echo Exporting to "%out_path%" ..

SET /a x=0

:: IMPORTANT!!!! E: represents your CD player's letter drive. You may need to change this to D:
FOR /R E:\ %%G IN (*.cda) DO (CALL :SUB_VLC "%%G")
GOTO :eof

:SUB_VLC
call SET /a x=x+1
ECHO Transcoding %1
SET out_track=%out_path%Track!x!.mp3
ECHO Output to "%out_track%"
ECHO exporting .. ..

:: Here's where the actual transcoding/conversion happens. The next line
:: fires off a command to VLC.exe with the relevant arguments:
:: IMPORTANT!!!! E: represents your CD player's letter drive. You may need to change this to D:
CALL "%vlc_path%" -I http cdda:///E:/ --cdda-track=!x! :sout=#transcode{vcodec=none,acodec=mp3,ab=128,channels=2,samplerate=44100}:std{access="file",mux=raw,dst="%out_track%"} vlc://quit
GOTO :eof
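If batch isn’t your thing, the per-track loop is easy to sketch in Python too. This hypothetical helper only builds the VLC command for each track (it doesn’t run anything); the vlc path, drive letter, and track count are placeholders you’d adjust:

```python
import os

def vlc_rip_commands(vlc_exe, drive, track_count, out_dir):
    """Build one VLC transcode command per CD track, mirroring the batch file."""
    commands = []
    for track in range(1, track_count + 1):
        out_track = os.path.join(out_dir, "Track%d.mp3" % track)
        commands.append([
            vlc_exe, "-I", "http", "cdda:///%s/" % drive,
            "--cdda-track=%d" % track,
            ":sout=#transcode{vcodec=none,acodec=mp3,ab=128,channels=2,"
            "samplerate=44100}:std{access=file,mux=raw,dst=%s}" % out_track,
            "vlc://quit",
        ])
    return commands

# Each list could be handed to subprocess.call(); placeholders shown here.
cmds = vlc_rip_commands(r"C:\Program Files\VideoLAN\VLC\vlc.exe", "E:", 2,
                        r"C:\MP3s\Demo")
```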



5. Depending on your version of Windows, you may need to edit the path to your VLC installation.

Look for “IMPORTANT!!!!” in the code you just pasted. This is where you set the path to VLC.

Back in Step 1 I asked you to make note of the VLC installation path. My home system runs Windows XP 32 bit (yes, still), which means VLC installed to..

C:\Program Files\VideoLAN\VLC\vlc

—see where that path is declared in the file? YOU, however, are probably running Windows 7 ..or maybe 8, and if that’s the case, I think you need to change the VLC path to..


..which you should confirm before you continue. Try opening or browsing into that directory and make sure you see the file, vlc.exe. If you see that file, you’re good to advance.


6. Similarly, you may need to edit the code to use the correct letter drive for your CD player. On my system, it’s the E: drive, but on your system, it’s probably the D: drive.. So if changing this value is necessary, find the two remaining “IMPORTANT!!!!” comments in the code and change E: to the letter drive for your CD player.


7. Save the file as a so-called Batch file (.bat).

In Notepad, select File > Save as..

Make sure you change “Save as type” to “All Files”. Then browse into the C:\MP3s folder you created in Step 2 and save the file as “_rip-cd.bat”, which starts with an underscore, “_”.

Frankly, you can name the file whatever you want, but the underscore should cause the file to always appear above or below anything else in this folder, which I find helpful.

Make sure that the file REALLY ends in the .bat extension, and not .txt. If this is correct, your file icon should look like a little gear.


8. Now it’s time to put your hard work to use, so pop in a CD and open a fresh command window (a.k.a. the terminal).

Start > Run > type “cmd” without quotes > ::hit Enter::

In the terminal, type the full path to your batch file. If you stuck to the example, that would be..


..follow it with a space, and then a path on your system to where you want to save the exported MP3s. So your full command might look like below. (FYI: “TREoS” is a band, and “Between the Heart and the Synapse” is an album.)

C:\MP3s\_rip-cd.bat C:\MP3s\TREoS\BetweenTheHeartAndTheSynapse

You do NOT have to create the export folder first; the script will create it for you automatically.

One caveat: if you have multiple albums by the same artist, you’ll want to save them in different folders. This is important because the program can’t name the songs for you, so they’ll be created with names like Track1.mp3, Track2.mp3, etc., and if you try to export another album into the same directory, it will probably fail and complain when it tries to save another Track1.mp3 into a folder already containing a Track1.mp3. Hopefully this makes sense.

Here’s what it looks like when you fire it up and run it. Just so you know what you’re seeing, the part I actually typed was

D:\x_Rip\_rip-cd.bat D:\x_Rip\STP\Purple

(yeah ..I’m using different drive letters and a different folder path, but hopefully you get the gist.)

That’s all there is to it. Using this approach, it’s ultra-painless to export a CD to MP3s. As a bonus, you’ll export the same way every time, never having to worry about required or optional configurations, or some pesky copy protection that some software tries to impose.



Written by elrobis

March 7th, 2015 at 5:37 pm

Posted in Uncategorized

Gabriel Weinberg’s PostgreSQL Tips and Tricks

without comments

Here’s a solid article on tuning your PostgreSQL instance and optimizing your indexes.

Also, here’s a nice primer on creating and managing indexes in PostgreSQL.

I wanted to share ’em now and save ’em for later.


Written by elrobis

January 22nd, 2015 at 2:20 pm

Posted in Uncategorized

ArcPy CalculateField_management Complex Python Expression to Return a String

without comments

My coworker was having some trouble implementing a CalculateField expression in Python, so I wanted to document the solution for anyone with a similar issue. (Note: If you just want the answer, see the What ultimately worked.. heading, below.)

Being unfamiliar with ArcPy, I was intrigued by this idea of defining a function as a string value, then passing that string as a variable into another function. I can see how that opens a powerful door, but it also strikes me as ultra weird, and because Python has some very particular spacing/indentation rules (as compared to, say, JavaScript), I figured this technique would be ultra-prone to syntax issues ..and thus could be especially difficult to troubleshoot. So in this case, I also wanted to demonstrate how I ultimately thought through the problem.

After some quick Googling, we found two relatively useful pages of documentation in the ESRI support ecosystem, and both pages demonstrated different ways to formulate the function-as-string parameter necessary for the CalculateField_management() function.

The biggest difference between these examples is how they portrayed line breaks in the function-as-string; we’ll consider them first.


The first example..

In the first ESRI example, the function-as-string was concatenated together across multiple lines using a backslash \ like this..

codeblock = "def getclass(area):\ 
        if area <= 1000:\ 
            return 1\ 
        if area > 1000 and area <= 10000:\ 
            return 2\ 
        else:\ 
            return 3" 

print codeblock

Realizing Python is very picky about syntax, I wanted to see what happened if I printed codeblock to the console. Notice that spacing is preserved, but there are no line breaks:

Frankly, it’s difficult for me to believe this was ever a working example, and surprise—this approach didn’t end up working for us..


And the second example..

In the second ESRI example, the function-as-string was concatenated together across multiple lines using triple-quotes """ to declare a multiline string, like this..

codeblock = """def getClass(area):
    if area <= 1000:
        return 1
    if area > 1000 and area <= 10000:
        return 2
    else:
        return 3"""

print codeblock

Similarly, I wanted to print codeblock to the terminal to see how the string was formatted before going into the ArcPy function; this time, the line breaks were preserved:

..that looks more like a Python function, so that was a good start. But things still weren’t working..


How to test our custom function..?

We were still getting errors, and the stack dump claimed they were syntax errors. But because of the unusual nature of this software design, I wasn’t sure if they were really syntax errors, or some kind of misleading, catch-all error ..perhaps our code-in-a-string function was flawed?

So my next step was to test our function, as a legit Python function, to see if it would accept and return a value like I expected.

def getClass(area):
    if area <= 1000:
        return 1
    if area > 1000 and area <= 10000:
        return 2
    else:
        return 3

print getClass(1500)

As you can see, the code ran without any syntax errors..

Also, 2 is the correct return value for the input we used in the test. So I felt comfortable that the function itself—as written—worked. So now I was perplexed as to why the ESRI CalculateField_management() function was rejecting it ..with a syntax error.


What ultimately worked..

My co-worker’s function had another gotcha—unlike the two ESRI examples, which return numeric values, we needed our function to return a string. My Python was rusty, and I couldn’t remember whether a string could be declared equally well between single quotes, i.e. 'MyString', and double quotes, i.e. "MyString". Plus, I still knew we were looking for an alleged syntax error.

I finally Googled a third page of ESRI documentation and noticed this little tidbit:

Python enforces indentation as part of the syntax. Use two or four spaces to define each logical level. Align the beginning and end of statement blocks, and be consistent.

My coworker had originally used one and two spaces, respectively, to control indentation in his custom function, along with the first technique for declaring a multiline string. So I finally rewrote our function using the triple-quotes technique as well as 2 and 4 space indents like ESRI said to, and this is what we ended up with..

For the record, while a large if/elif block may not be the best way to write this function, it is a very readable way to write it, not to mention the most intuitive from my coworker’s perspective.

codeblock = """def getclass(SPEEDLIM):
  if SPEEDLIM >= 60:
    return 'A10'
  elif SPEEDLIM == 55:
    return 'A30'
  elif SPEEDLIM == 50:
    return 'A20'
  elif SPEEDLIM == 45:
    return 'A30'
  elif SPEEDLIM == 40:
    return 'A30'
  elif SPEEDLIM == 35:
    return 'A40'
  elif SPEEDLIM == 30:
    return 'A00'
  elif SPEEDLIM == 20:
    return 'A00'
  elif SPEEDLIM == 15:
    return 'A00'
  elif SPEEDLIM == 10:
    return 'A61'
  elif SPEEDLIM == 5:
    return 'A62'
  elif SPEEDLIM <= 5:
    return 'A71'"""

And this worked. I think the fix resulted from using triple-quotes to declare the function string, rather than the backslash approach we started with. While we also changed the indentation to use 2/4-spaces, as mentioned above, it’s possible we unwittingly fixed a different syntax error in the function along the way. Since we didn’t make these changes independently, it’s impossible to know which exactly made the difference.



So this is a summary that might help you if you’re having difficulty with this kind of ArcPy data management exercise:

  • Try writing a basic version of your function in a throw-away .py file and test it to make sure it definitely works—if nothing else, this is a good sanity check.
  • Make sure to use the triple-quote technique to properly preserve line breaks, and declare your function as a string variable that you can pass into your call to the geoprocessor. (You may need to use single-quotes to declare any strings within your larger, multiline string, which was the technique we used.)
  • If things still aren’t working, try using increments of 2 spaces to control various levels of indentation within your custom function, i.e. 2/4/6/8/etc..
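On that first bullet, the sanity check can even be automated: because the codeblock is just a string, you can exec it in a scratch namespace and assert on a few inputs before handing it to the geoprocessor. A sketch, using a shortened version of the speed-limit function (not the full block above):

```python
# A trimmed function-as-string, same shape as the real codeblock.
codeblock = """def getclass(SPEEDLIM):
  if SPEEDLIM >= 60:
    return 'A10'
  elif SPEEDLIM == 55:
    return 'A30'
  elif SPEEDLIM <= 5:
    return 'A71'"""

namespace = {}
exec(codeblock, namespace)        # compile the string, much like ArcPy will
getclass = namespace["getclass"]  # pull the function back out and test it

assert getclass(70) == 'A10'
assert getclass(55) == 'A30'
assert getclass(2) == 'A71'
```

If those assertions pass, any error raised later by CalculateField_management is less likely to be a genuine syntax problem in your function.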

Hopefully that helps you get where you’re trying to go.


– elrobis

Written by elrobis

November 25th, 2014 at 3:50 pm

Posted in ArcPy,Conversion,Python

Create UTFGrid Tiles from PostGIS Tables

without comments

I assume you’re where I was about a week ago. That is, you’ve heard of UTFGrid, and now you want to render your own UTFGrid tiles. Perhaps you found yourself looking at this thread over at GIS.SE, but you don’t want to jump into TileStache just now. Anyway if you’ve got a working installation of GDAL/OGR and Mapnik 2+, complete with Python bindings, I’ll show you what worked for me..

Because this is merely an adaptation of Matthew Perry’s original solution, I highly recommend considering his original blog entry on the topic, complete with discussion points and caveats, before proceeding further!

Once you’re ready, head over to GitHub and download some code, specifically globalmaptiles.py, create_tile_shp.py, and create_utfgrids.py. You don’t actually need create_tile_shp.py, but I snagged all three. To keep things simple, put these files in the same folder.

Next, in that same directory, create a new Python file; I called mine createUtfgridsFromPG.py.

Where create_utfgrids.py works entirely on shapefiles, createUtfgridsFromPG.py accepts an OGR PostgreSQL connection string in place of a shapefile path. Fortunately the original OGR code didn’t change, but to use the Mapnik PostGIS driver I had to iterate over the PostgreSQL connection string and store the connection parameters so I could supply them differently to Mapnik.
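That iterate-and-store step can be compacted into a one-line dict build. A sketch, assuming the usual "PG: key=value key=value .." OGR connection-string form with no spaces inside values (the connection string below is a made-up example):

```python
def parse_ogr_pg_conn(pgconn):
    """Split 'PG: host=x port=y ...' into a dict whose entries can be
    fed to Mapnik's PostGIS datasource as keyword arguments."""
    # Drop the leading 'PG:' then split on whitespace and '=' once per pair.
    return dict(kv.split("=", 1) for kv in pgconn[3:].split())

params = parse_ogr_pg_conn(
    "PG: host=localhost port=5432 user=gis password=secret dbname=gisdb tables=parcels")
# e.g. mapnik.PostGIS(host=params["host"], port=params["port"], ...)
```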

Finally, copy the following code and paste it into your new file. It’s basically the same as create_utfgrids.py, but I changed shppath to pgconn and added some code to use a mapnik.PostGIS() datasource in place of a mapnik.Shapefile() datasource.

If you’re curious, the comment # ELR 2014.9.26: flags the few places I changed the original code.


#!/usr/bin/env python
# -*- coding: utf-8  -*-
Author: Matthew Perry
License: BSD

Creates utfgrid .json tiles for the given polygon shapefile

Thx to Dane Springmeyer for the utfgrid spec and mapnik rendering code
and to  Klokan Petr Přidal for his MapTiler code

import globalmaptiles
import mapnik
import ogr
import os
from optparse import OptionParser, OptionError
try:
    import simplejson as json
except ImportError:
    import json

def create_utfgrids(pgconn, minzoom, maxzoom, outdir, fields=None, layernum=0):

    # ELR 2014.9.26:
    # Original implementation pushed in a shapefile path.
    #ds = ogr.Open(shppath)
    ds = ogr.Open(pgconn)

    # ELR 2014.9.26:
    # Iterate over the PostgreSQL connection string and pull out values we need
    # to use Mapnik's PostGIS datasource constructor.
    pgConnARR = pgconn[3:].split(' ')
    for kvPair in pgConnARR:
        if kvPair.split('=')[0] == "host":
            nikHost = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "port":
            nikPort = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "user":
            nikUser = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "password":
            nikPass = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "dbname":
            nikDB = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "tables":
            nikTable = kvPair.split('=')[1]

    print "WARNING:"
    print " This script assumes a polygon shapefile in spherical mercator projection."
    print " If any of these assumptions are not true, don't count on the results!"
    # TODO confirm polygons
    # TODO confirm mercator
    # TODO get layernum from command line
    layer = ds.GetLayer(layernum)
    bbox = layer.GetExtent()
    print ""
    print str(bbox)

    mercator = globalmaptiles.GlobalMercator()

    m = mapnik.Map(256,256)

    # Since grids are `rendered` they need a style
    s = mapnik.Style()
    r = mapnik.Rule()
    polygon_symbolizer = mapnik.PolygonSymbolizer(mapnik.Color('#f2eff9'))
    r.symbols.append(polygon_symbolizer)
    line_symbolizer = mapnik.LineSymbolizer(mapnik.Color('rgb(50%,50%,50%)'),0.1)
    r.symbols.append(line_symbolizer)
    s.rules.append(r)
    m.append_style('My Style',s)

    print ""
    # ELR 2014.9.26:
    # Original implementation using shapefile..
    #ds = mapnik.Shapefile(file=shppath)

    # ELR 2014.9.26:
    # Parameterized PostGIS implementation..
    ds = mapnik.PostGIS(host=nikHost,port=nikPort,user=nikUser,password=nikPass,dbname=nikDB,table=nikTable)

    mlayer = mapnik.Layer('poly')
    mlayer.datasource = ds
    mlayer.styles.append('My Style')
    m.layers.append(mlayer)

    print ""
    if fields is None:
        fields = mlayer.datasource.fields()
        print "Fields were NONE. Using.."
        print fields
    else:
        print "Fields are USER PROVIDED. Using.."
        print fields
    print ""

    for tz in range(minzoom, maxzoom+1):
        print " * Processing Zoom Level %s" % tz
        tminx, tminy = mercator.MetersToTile( bbox[0], bbox[2], tz)
        tmaxx, tmaxy = mercator.MetersToTile( bbox[1], bbox[3], tz)
        for ty in range(tminy, tmaxy+1):
            for tx in range(tminx, tmaxx+1):
                output = os.path.join(outdir, str(tz), str(tx))
                if not os.path.exists(output):
                    os.makedirs(output)

                # Use top origin tile scheme (like OSM or GMaps)
                # TODO support option for TMS bottom origin scheme (ie opt to not invert)
                ymax = 1 << tz;
                invert_ty = ymax - ty - 1;

                tilefilename = os.path.join(output, "%s.json" % invert_ty) # ty for TMS bottom origin
                tilebounds = mercator.TileBounds( tx, ty, tz)
                #print tilefilename, tilebounds

                box = mapnik.Box2d(*tilebounds)
                m.zoom_to_box(box)
                grid = mapnik.Grid(m.width,m.height)
                mapnik.render_layer(m,grid,layer=0,fields=fields)
                utfgrid = grid.encode('utf',resolution=4)
                with open(tilefilename, 'w') as file:
                    file.write(json.dumps(utfgrid))

if __name__ == "__main__":
    usage = "usage: %prog [options] pg_connection_string minzoom maxzoom output_directory"
    parser = OptionParser(usage)
    parser.add_option("-f", '--fields', dest="fields", help="Comma-separated list of fields; default is all")
    (options, args) = parser.parse_args()

    if len(args) != 4:
        parser.error("Incorrect number of arguments")

    pgconn = args[0]
    minzoom, maxzoom = int(args[1]), int(args[2])
    outdir = args[3]

    if os.path.exists(outdir):
        parser.error("output directory exists already")

    if options.fields:
        fields = options.fields.split(",")
    else:
        fields = None

    create_utfgrids(pgconn, minzoom, maxzoom, outdir, fields)


Once you’ve prepared createUtfgridsFromPG.py, you can call it from the command line like this..

C:\xDev\utfgrids\createUtfgridsFromPG.py "PG:host= port=5432 user=postgres dbname=gis password=passw0rd tables=parcels_pmerc" 12 16 "C:/xGIS/tiles/utf" -f tms,owner_name

  • Hopefully the PostgreSQL connection string ("PG:host=..") makes sense.
  • 12 and 16 represent the minimum and maximum zoom levels to be rendered, respectively.
  • The directory “C:/xGIS/tiles/utf” is where your UTFGrid tiles will be saved.
  • And -f tms,owner_name represents a comma-separated list of data fields you want in your UTFGrid.


A few words of caution:

  • Both create_utfgrids.py and createUtfgridsFromPG.py require your geodata table to be in a Web Mercator projection (EPSG:3857)!
  • The script assumes a top-origin tile scheme, like OSM and others.
  • The script will only work with polygons.
  • While the OGR PostgreSQL connection string has a tables parameter, this implementation will only accept one table.
  • The script will create your target directory (in the example case, utf), and it will throw an error if you create this directory in advance.
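Regarding the top-origin tile scheme mentioned above: the XYZ (OSM/GMaps) row index and the TMS bottom-origin row index differ only by a flip at each zoom level, which a quick sketch makes concrete (the `flip_y` name is mine):

```python
# XYZ (top-origin) vs TMS (bottom-origin) tile rows differ only by this
# flip; applying it twice returns the original row index.
def flip_y(ty, zoom):
    return (1 << zoom) - ty - 1

print(flip_y(0, 3))  # 7: row 0 at zoom 3 maps to row 7 in the other scheme
```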

[1] Many thanks to Matthew Perry, Klokan Petr Přidal, and Dane Springmeyer for their collective efforts and for sharing their work.

Written by elrobis

September 26th, 2014 at 1:26 pm

OGR VRT: Connect to PostGIS DataSource

without comments

I needed an OGR VRT for something and didn’t find a clear example on the web all in one place, so here goes.

Somewhere on your system, create a new file with a .ovf extension. Inside that file, add some XML like the following to define your PostgreSQL connection:

The name="WKTGrid" attribute has no special meaning here. I have been experimenting with including WKT geometry data in UTFGrid tiles, and that name reflects my experiments; you can provide most any value for name. However, do note that the layer name is referenced in the ogrinfo command.

<OGRVRTDataSource>
  <OGRVRTLayer name="WKTGrid">
    <SrcDataSource>PG:host= user=postgres dbname=gis password=l00per</SrcDataSource>
    <SrcLayer>parcels_cama_20140829_pmerc</SrcLayer>
    <SrcSQL>SELECT tms, owner_name, the_wkt FROM parcels_cama_20140829_pmerc</SrcSQL>
  </OGRVRTLayer>
</OGRVRTDataSource>

  • OGRVRTLayer The layer name attribute you assign can be anything you want.
  • SrcDataSource The data source value defines your PostgreSQL connection parameters.
  • SrcLayer The source layer identifies the target table in your PostgreSQL instance.
  • SrcSQL [Optional] OGR SQL can be used to target specific fields, define field aliases, and even refine the data set using WHERE clauses, etc.
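If you find yourself generating many of these VRT files, the same XML can be built programmatically. This is just a sketch using Python’s standard library; the `build_vrt` helper, connection values, and table name are placeholders of mine:

```python
# Hypothetical helper: emit an OGRVRTDataSource document from parameters.
import xml.etree.ElementTree as ET

def build_vrt(layer_name, conn, sql):
    root = ET.Element("OGRVRTDataSource")
    lyr = ET.SubElement(root, "OGRVRTLayer", name=layer_name)
    ET.SubElement(lyr, "SrcDataSource").text = conn
    ET.SubElement(lyr, "SrcSQL").text = sql
    return ET.tostring(root, encoding="unicode")

vrt = build_vrt("WKTGrid",
                "PG:host=localhost user=postgres dbname=gis",  # placeholder creds
                "SELECT tms, owner_name, the_wkt FROM parcels_cama_20140829_pmerc")
print(vrt)
```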

After you make a VRT, it’s smart to test it with ogrinfo before you use it for anything serious; if ogrinfo makes sense of it, you know you’ve got a good VRT.

A command like this uses ogrinfo and OGR_SQL to open the VRT and isolate one feature, showing you its attributes.

ogrinfo C:\xGIS\Vector\parcels\parcels_cama_20140829_pmerc.ovf -sql " SELECT tms, owner_name, the_wkt FROM WKTGrid WHERE tms = 'R39200-02-31' "

In some cases, OGR may have trouble identifying your geometry field, or you may have multiple geometry fields and want to specify one field in particular. If so, note the following changes, specifically the modification to the SrcSQL node and the added GeometryField node.

<OGRVRTDataSource>
  <OGRVRTLayer name="WKTGrid">
    <SrcDataSource>PG:host= user=postgres dbname=gis password=l00per</SrcDataSource>
    <SrcSQL>SELECT ST_AsBinary(wkb_geometry) as geomm, tms, owner_name FROM parcels_cama_20140829_pmerc</SrcSQL>
    <GeometryField encoding="WKB" field="geomm"></GeometryField>
  </OGRVRTLayer>
</OGRVRTDataSource>

And this is just scratching the surface. Make sure to check out the OGR VRT driver page for a complete list of options available to you.

Written by elrobis

September 24th, 2014 at 4:45 pm

MySQL Implementation of Google’s Encoded Polyline Algorithm

with one comment

I just noticed someone created a MySQL implementation of Google’s Encoded Polyline Algorithm incorporating some elements of my PostgreSQL/PostGIS approach, specifically, support for multi-part, multi-ring polygon geometries.

And this is exciting: at the bottom of his post, Fabien says he’s working on a solution for consuming the Google encoded geometries in Leaflet. Nice! I love open source software!

I thought I’d mention the MySQL approach here to help expose Fabien’s solution to English searches and drive a little more traffic to his site. If you can’t read French, Google Translate is your friend!

FWIW–if anyone is interested and so inclined–there is a potential improvement that could be made to the Google Encoded Polyline implementation: allow users to specify the number of decimal places retained when rounding the latitude and longitude coordinate values prior to encoding. If memory serves, the documentation for Google’s algorithm allows/expects only 5 decimal places. But on some datasets with curved line features (often just heavily sampled line segments), that 5-decimal limit degrades the data, and you end up with jagged line features. So an improved solution would provide an optional parameter to specify the precision.
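To see the rounding effect concretely, here’s a quick sketch with arbitrary coordinate values: two points only a few centimeters apart collapse to the same value once scaled to five decimal places, which is exactly how densely sampled curves turn jagged.

```python
# Two latitudes ~3e-7 degrees apart (a few centimeters on the ground)
# become indistinguishable after scaling to 5 decimal places.
a, b = 33.5671812, 33.5671809
print(round(a * 1e5), round(b * 1e5))  # both 3356718
```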

All else equal–nice work, Fabien!

Written by elrobis

September 22nd, 2014 at 4:26 pm

Convert Google Maps Polygon (API V3) to Well Known Text (WKT) Geometry Expression

with 6 comments

There are dozens of reasons why you might want the Well Known Text (WKT) geometry expression for a Google Maps Polygon object.

Assuming you’re using the Google Maps API V3, and you’ve got a variable referencing your Polygon, I’ll suggest two approaches you can take to iterate over the paths and vertices in your Google Maps polygon and return the geometry expression as a Well Known Text string.

Add a Simple Utility Method to Your Project

Easy enough. Just add the following method to your project. Look below the method for an example of how you’d call it.

function GMapPolygonToWKT(poly)
{
 // Start the Polygon Well Known Text (WKT) expression
 var wkt = "POLYGON(";

 var paths = poly.getPaths();
 for(var i=0; i<paths.getLength(); i++)
 {
  var path = paths.getAt(i);
  // Open a ring grouping in the Polygon Well Known Text
  wkt += "(";
  for(var j=0; j<path.getLength(); j++)
  {
   // add each vertex and anticipate another vertex (trailing comma)
   wkt += path.getAt(j).lng().toString() +" "+ path.getAt(j).lat().toString() +",";
  }
  // Google's approach assumes the closing point is the same as the opening
  // point for any given ring, so we have to refer back to the initial point
  // and append it to the end of our polygon wkt, properly closing it.
  // Also close the ring grouping and anticipate another ring (trailing comma)
  wkt += path.getAt(0).lng().toString() + " " + path.getAt(0).lat().toString() + "),";
 }
 // resolve the last trailing "," and close the Polygon
 wkt = wkt.substring(0, wkt.length - 1) + ")";

 return wkt;
}
Here’s how you’d access the Well Known Text expression using the utility method:

// Assuming you've already instantiated "myPolygon" somewhere.
var wkt = GMapPolygonToWKT(myPolygon);
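For reference, the same ring-closing logic can be sketched language-neutrally in Python. The `polygon_to_wkt` helper is mine, operating on plain coordinate lists rather than Google Maps objects; the key step is repeating each ring’s first point at the end.

```python
# Given a polygon as a list of rings, each ring a list of (lng, lat) pairs
# with the first point NOT repeated at the end, emit a WKT POLYGON string.
def polygon_to_wkt(rings):
    parts = []
    for ring in rings:
        pts = ring + [ring[0]]  # close the ring by repeating the first point
        parts.append("(" + ",".join("%s %s" % (lng, lat) for lng, lat in pts) + ")")
    return "POLYGON(" + ",".join(parts) + ")"

print(polygon_to_wkt([[(0, 0), (0, 1), (1, 1)]]))
# POLYGON((0 0,0 1,1 1,0 0))
```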

Extend Google’s Polygon Object Prototype with a ToWKT() Method

There’s nothing wrong with the first approach, but you might find it handy to extend Google’s Polygon object prototype, itself, to include a ToWKT() member function, which makes it even easier to get its Well Known Text. To do that, add the following JavaScript somewhere near the top of your code (caveat—this will need to be called after the Google Maps library has been loaded):

if (typeof google.maps.Polygon.prototype.ToWKT !== 'function')
{
 google.maps.Polygon.prototype.ToWKT = function()
 {
  var poly = this;
  // Start the Polygon Well Known Text (WKT) expression
  var wkt = "POLYGON(";
  var paths = poly.getPaths();
  for(var i=0; i<paths.getLength(); i++)
  {
   var path = paths.getAt(i);
   // Open a ring grouping in the Polygon Well Known Text
   wkt += "(";
   for(var j=0; j<path.getLength(); j++)
   {
    // add each vertex, automatically anticipating another vertex (trailing comma)
    wkt += path.getAt(j).lng().toString() + " " + path.getAt(j).lat().toString() + ",";
   }
   // Google's approach assumes the closing point is the same as the opening
   // point for any given ring, so we have to refer back to the initial point
   // and append it to the end of our polygon wkt, properly closing it.
   // Additionally, close the ring grouping and anticipate another ring (trailing comma)
   wkt += path.getAt(0).lng().toString() + " " + path.getAt(0).lat().toString() + "),";
  }
  // resolve the last trailing "," and close the Polygon
  wkt = wkt.substring(0, wkt.length - 1) + ")";
  return wkt;
 };
}
If you prefer the second approach, you can get the Well Known Text expression like this:

// Assuming you've already instantiated "myPolygon" somewhere.
var wkt = myPolygon.ToWKT();

Written by elrobis

June 6th, 2014 at 10:44 am

PostGREsql/PostGIS Implementation of Google’s Encoded Polyline Algorithm

with 8 comments

[Edit 30 Jan, 2014]

I added an additional PostGREsql method to perform Polygon encoding by concatenating polygon geometries (delimiter: †) and their inner rings (delimiter: ‡) together into one massive encoded block of ring features. I also provided an example JavaScript method demonstrating how to bring the amalgamated polygon feature encodings into your Google Map.

By some uncanny twist of the fates, I’ve elected to use, had to use, and/or been asked to develop applications that use Google Maps ASCII Encoded Polyline expressions. In previous encounters, I’ve used a PHP class to handle the encoding task, and most recently I wrote a Python method to decode these expressions so that I could return a 3rd-party’s encoded geometries to WKT and import them into a spatially aware database.

So far so good.

However one thing has always bugged me about using the PHP solution–I don’t like using a piece of middleware to handle what I consider to be a responsibility of the data layer. Mark McClure’s page, which is basically the seminal authority on this topic, provides external links to implementations for Perl, Ruby, PHP (note: I prefer the PHP class linked, above), Java, and Mathematica. Also, by searching Stack Overflow, you can find implementations of the algorithm in both C# and VB.Net. But for all my efforts searching, I could never dredge up an implementation for either MySQL or PostGREsql/PostGIS. Bummer.

Looking up, it seems version 2.2 of PostGIS might include some built-in Google encoding conversion methods. While this is cool enough for a hat tip, unfortunately, it’s too inconvenient to wait that long, and even then, there’s no guarantee the implementation will work the way I expect with complex Polygon geometries; for instance, maybe it will encode only the exterior ring of Polygons, ignoring MultiPolygons completely, etc. For that matter, it’s equally possible there could be some bugs. So with this said, and even though the previously-mentioned PHP implementation does the job, my boss was cool-enough to let me take a crack at implementing the algorithm as a PostGREsql/PostGIS function, and then share the results with the world. Since some initial testing confirms my PostGIS implementation works, I’ll just post the code parts and hope others find it useful.

For what it’s worth, if anyone finds a bug or has recommendations for improvements, please don’t hesitate to drop me a line.


Sample query calling the first encoding function on the EXTERIOR RING of Polygon geometries:
(Also works on single-part LINESTRING features.)

/*
 * Note that the encoding method can accept a LINESTRING only, which
 * is the geometry type used to represent the ring parts of a Polygon.
 * To help understand this, and why, please see the trailing discussion
 * section, which elaborates on this situation further.
 */
SELECT
  GoogleEncodeLine(ST_ExteriorRing(wkb_geometry)) as Google
FROM polygons_wgs84
WHERE ST_GeometryType(wkb_geometry) = 'ST_Polygon'
LIMIT 10 ;


[Added 30 Jan, 2014]

Sample query calling the second encoding function on Polygon and MultiPolygon geometries:
(Preserves multi-part polygons and their inner-ring parts, a.k.a. “holes”.)

/*
 * This encoding method will accept Polygon and MultiPolygon geom types.
 * The output returned is an amalgamation of Polyline encodings, where
 * individual geometries and their interior rings are concatenated
 * together using string delimiters, †, and ‡, respectively.
 */
SELECT
  GoogleEncodePolygon(wkb_geometry) as GooglePolygon
FROM polygons_wgs84
LIMIT 10 ;


Implementation functions to execute/save in your PostGREsql instance:

[Added 30 Jan, 2014]

CREATE OR REPLACE FUNCTION GoogleEncodePolygon
(
  g1 GEOMETRY
)
RETURNS TEXT AS $$
/*
 * Pass in either a Polygon or MultiPolygon geometry. Returns
 * an array of ASCII-encoded Polygon feature parts, including
 * multi-part geometries and their interior rings.
 */
DECLARE
 ng INT;        -- Store number of Geometries in the Polygon.
 g INT;         -- Counter for the current geometry number during outer loop.
 g2 GEOMETRY;   -- Current geometry feature isolated by the outer loop.
 nr INT;        -- Store number of internal ring parts in the Polygon.
 r INT;         -- Counter for the current inner-ring part.
 r1 GEOMETRY;   -- Exterior ring part isolated BEFORE the inner loop.
 r2 GEOMETRY;   -- Inner-ring part isolated within the inner loop.
 gEncoded TEXT; -- Completed Google Encoding.
BEGIN
 gEncoded = '';
 ng = ST_NumGeometries(g1);
 g = 1;
 FOR g IN 1..ng BY 1 LOOP
     g2 = ST_GeometryN(g1, g);
     if g > 1 then gEncoded = gEncoded || chr(8224); END IF;
     -- Get ExteriorRing now; if there are any holes, get them later in the loop..
     r1 = ST_ExteriorRing(g2);
     gEncoded = gEncoded || GoogleEncodeLine(r1);
     nr = ST_NRings(g2);
     if nr > 1 then
       -- One (1) is because interior rings is one-based.
       -- And nr-1 is because ring count includes the boundary.
       FOR r IN 1..(nr-1) BY 1 LOOP
         r2 = ST_InteriorRingN(g2, r);
         gEncoded = gEncoded || chr(8225) || GoogleEncodeLine(r2);
       END LOOP;
     END IF;
 END LOOP;
 RETURN gEncoded;
END
$$ LANGUAGE plpgsql;


CREATE OR REPLACE FUNCTION GoogleEncodeLine
(
  g GEOMETRY
)
RETURNS TEXT AS $$
/*
 * First of two methods. Pass in a geometry (LINESTRING only).
 * Returns ASCII-encoded point array for use in Google Maps.
 */
DECLARE
  pt1 GEOMETRY;
  pt2 GEOMETRY;
  p INT; np INT;
  deltaX INT;
  deltaY INT;
  enX VARCHAR(255);
  enY VARCHAR(255);
  gEncoded TEXT;
BEGIN
  gEncoded = '';
  np = ST_NPoints(g);

  IF np > 3 THEN
    g = ST_SimplifyPreserveTopology(g, 0.00001);
    np = ST_NPoints(g);
  END IF;

  pt1 = ST_SetSRID(ST_MakePoint(0, 0),4326);

  FOR p IN 1..np BY 1 LOOP
    pt2 = ST_PointN(g, p);
    deltaX = (floor(ST_X(pt2)*1e5)-floor(ST_X(pt1)*1e5))::INT;
    deltaY = (floor(ST_Y(pt2)*1e5)-floor(ST_Y(pt1)*1e5))::INT;
    enX = GoogleEncodeSignedInteger(deltaX);
    enY = GoogleEncodeSignedInteger(deltaY);
    gEncoded = gEncoded || enY || enX;

    pt1 = ST_SetSRID(ST_MakePoint(ST_X(pt2), ST_Y(pt2)),4326);
  END LOOP;
  RETURN gEncoded;
END
$$ LANGUAGE plpgsql;


CREATE OR REPLACE FUNCTION GoogleEncodeSignedInteger(c INT)
RETURNS VARCHAR(255) AS $$
/*
 * Second of two methods. Accepts a signed integer (LON or LAT
 * by 1e5) and returns an ASCII-encoded coordinate expression.
 */
DECLARE
  e VARCHAR(255);
  s BIT(32);
  b BIT(6);
  n INT;
BEGIN
 e = '';
 s = (c::BIT(32))<<1;

 IF s::INT < 0 THEN
   s = ~s;
 END IF;

 WHILE s::INT >= B'100000'::INT LOOP
   b = B'100000' | (('0'||substring(s, 28, 5))::BIT(6));
   n = b::INT + 63;
   e = e || chr(n);
   s = s >> 5;
 END LOOP;
 e = e || chr(s::INT+63);

 RETURN e;
END
$$ LANGUAGE plpgsql;


[Added 30 Jan, 2014]

JavaScript method demonstrating how to add Polygon encodings to a Google Map object:
(This client implementation works for either the single or the multi-part polygons.)

/*
 * JavaScript! Pass-in an encoded text block created by either
 * PostGIS method, GoogleEncodePolygon() or GoogleEncodeLine(),
 * and render it in your Google Map object. If you don't want
 * the map to zoom to each rendering, just remove the "bounds"
 * variable and any references to it.
 */
function renderEncoded(encoded_path)
{
   var bounds = new google.maps.LatLngBounds();
   var $encodedGeoms = encoded_path.split("†");
   for (var i=0; i<$encodedGeoms.length; i++)
   {
       var encodedGeom = $encodedGeoms[i];
       var $encodedRings = encodedGeom.split("‡");
       var polyPaths = [];
       for (var j=0; j<$encodedRings.length; j++)
       {
           var ptarray = google.maps.geometry.encoding.decodePath($encodedRings[j]);
           polyPaths.push(ptarray);
       }
       var polygonObject = new google.maps.Polygon(
       {
         paths: polyPaths,
         strokeColor: '#890000',
         strokeOpacity: 1.0,
         strokeWeight: 2
       });
       polygonObject.setMap(map); // "map" is your google.maps.Map instance
       polygonObject.getPath().forEach(function(latLng)
       {
           bounds.extend(latLng);
       });
   }
   map.fitBounds(bounds);
}

And some additional discussion..

There are two “gotchas” when it comes to implementing the encoding algorithm with respect to Polygons:

1) Polygons, as geometries, can be composed of many rings. The outer ring is considered to be the boundary, and various inner rings are often called “holes”. So this is a specified, understood, and accepted built-in many-to-one relationship between polygons and their internal ring geometries.

And 2) It’s not rare to find polygon tables containing both Polygon and MultiPolygon data types. I think this happens because ESRI allows it, and so in an effort to play well with others, other GIS systems have accommodated it. At least, I know this is true for MySQL and PostGIS.

Here’s why this makes trouble–Google’s encoding algorithm is only intended to represent individual point arrays as a singular geometry. Basically, as long as your first point equals your last point, it’s considered to be a closed geometry, and you can add it and render it in a Google Map as a polygon. The algorithm itself isn’t designed to represent nested arrays, which would be necessary to render either a Polygon with “holes” or a MultiPolygon, which could potentially define many polygons with holes of their own! As such, I suspect there could be considerable disagreement as to how a Polygon-to-Google-Encoded method should actually handle Polygons..

The only solutions I can imagine for this problem would require “faking” a one-to-many relationship by perhaps delimiting together several encodings to account for MultiPolygons and/or single feature Polygons with interior rings. But this starts to get weird. So to keep things somewhat simple for the sake of the post, I chose to stay true to the algorithm’s intent and return a single encoded geometry expression from my method. And the sample query demonstrates this by calling the method against the outermost ring (i.e. the boundary) of a Polygon geometry type, which PostGREsql regards as a LineString, anyway.

[Added 30 Jan, 2014]

Since I wanted to handle the more complex geometries, I wrote the wrapper method GoogleEncodePolygon(), to first iterate over ST_NumGeometries() and gain access to any multi-part features, then second, iterate over ST_NRings() using ST_InteriorRingN()–you could also do this using ST_DumpRings()–and gain access to any interior rings of the Polygon geometries, themselves. Then, for each ring part, I call GoogleEncodeLine(), and concatenate together all those expressions into one massive block of “compound” expressions. I chose to delimit each geometry encoding using an extra-special character that would never be used by Google’s algorithm; for example chr(8224), which corresponds to “†”. I then further delimit the internal ring parts using another special character, chr(8225), which corresponds to “‡”, and return all these concatenated together as a compound encoding expression. Then, on the client-side (a JavaScript example is provided above), I merely split the compound expression against my delimiters, loop over the various expressions, and add them to the map individually. Note if you are attaching attributes to your features, you’ll need to remember to include them explicitly to each unique Polygon added to your map.
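On the client side, that splitting step amounts to two nested splits; sketched here in Python terms with hypothetical encodings standing in for real polyline strings:

```python
# Split the compound encoding back apart: geometries on the dagger
# (chr(8224)), then each geometry's rings on the double-dagger (chr(8225)).
compound = "abc" + chr(8224) + "def" + chr(8225) + "ghi"  # hypothetical encodings
polygons = [geom.split(chr(8225)) for geom in compound.split(chr(8224))]
print(polygons)  # [['abc'], ['def', 'ghi']]
```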

Written by elrobis

January 27th, 2014 at 12:20 pm

PostGIS: query all multipolygon parcels with at least one hole

without comments

I was writing some code to iterate over Well Known Text expressions for polygon features, and I decided I needed to test the most complex edge-case I could think of–multipolygon geometries where at least one of the bound polygons has a hole (i.e. an interior ring).

I ended up with the following query. This seems like the kind of thing I’ll want to reuse later, so I’m noting it here. For good measure, I also use a rudimentary technique to sort the output with the most complicated geometries in the table at the top of the list. Basically, the more “text” it takes to describe the geometry using Well Known Text, the larger and more complex I figure it must be!

SELECT
  SomePrimaryId,   /* your primary key, i.e. ogc_fid, etc. */
  SomeUniqueId,    /* your descriptive id, i.e. a parcel number */
  ST_NumGeometries(wkb_geometry) AS num_geoms,
  ST_NRings(wkb_geometry) AS num_rings,
  ST_AsText(ST_Centroid(wkb_geometry)) AS center,
  Char_Length(ST_AsText(wkb_geometry)) AS len,
  ST_AsText(wkb_geometry) AS wkt
FROM SomePolygonTable
WHERE
  ST_NumGeometries(wkb_geometry) > 1
  AND ST_NRings(wkb_geometry) > ST_NumGeometries(wkb_geometry)
ORDER BY Char_Length(ST_AsText(wkb_geometry)) DESC ;


Just for the sake of promoting caution, I’m not certain this is a definitive approach for identifying the largest geometry in a table, as the length of the binary representation and the length of the readable text representation do not correspond one-to-one. Moreover, one feature could have many vertices expressed at low precision (fewer decimal places), while another has fewer vertices expressed at higher precision, and then you have to ask: which is bigger, fewer vertices and more text, or more vertices that coincidentally did not require as much text? My conclusion is that the “most complicated geometry” is relative to the one asking the question. However, for my purposes, this was close enough to put the most complicated stuff at the top of the list.
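A toy sketch of that caveat: a two-vertex segment written at high precision can out-measure a four-vertex ring written tersely, so text length alone doesn’t strictly rank geometric complexity.

```python
# Contrived WKT strings: fewer vertices at high precision vs. more
# vertices at low precision.
wkt_a = "LINESTRING(-92.7774351 35.9829342,-92.7774352 35.9829343)"  # 2 vertices
wkt_b = "LINESTRING(0 0,1 0,1 1,0 1)"                                # 4 vertices
print(len(wkt_a), len(wkt_b))  # the 2-vertex string is longer
```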

Written by elrobis

November 8th, 2013 at 10:26 am

Install httplib2 to your Preferred Python Runtime after ArcGIS Unceremoniously Hijacks the First Attempt

without comments

We’re going to use Google Maps Engine for some stuff, and so I thought I’d throw my first codes at it using Python. Well.. Google’s Python example requires the httplib2 library, so I needed to install that. When I did—following the library’s download/instructions—for some reason, the library went into service against ArcGIS’s embedded Python runtime, rather than my favored 2.7 instance, which I use for every single thing.


To fix this, I just ensured the new library’s setup.py script was called from my preferred Python runtime, rather than letting Windows, or some black magic, decide which instance httplib2 should integrate with.

Breaking it down:

1) cd into the unpacked httplib2 directory:

C:\>cd C:\Users\elijah\Downloads\httplib2-0.8

2) Next, you can qualify which Python runtime you want to execute the supplied setup.py install script—in my case, this worked to install httplib2 to that specific runtime. This was the exact command I used:

C:\Users\elijah\Downloads\httplib2-0.8>C:\Python27\python.exe setup.py install

Voila! Now when I pull up IDLE, relative to my preferred instance of Python 2.7, I can import httplib2 without issues.
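A quick way to confirm which runtime and module resolution you’re actually getting, from any interpreter you launch:

```python
# Print the interpreter that is running and where it resolves a module
# from; handy after an installer picks the "wrong" Python, as ArcGIS's did.
import sys
import json  # stands in for any library you care about, e.g. httplib2

print(sys.executable)  # path of the interpreter actually running
print(json.__file__)   # where this interpreter loads the module from
```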

It didn’t take too long to figure this out, but it seemed worth an easy post in case someone else is stumped and starting on the wrong foot.

Written by elrobis

August 5th, 2013 at 9:42 am

Posted in Python

Tagged with

osm2pgsql help and usage

without comments

I wanted a more convenient place to read the osm2pgsql help output, so this seemed as good a place as any for it. /E

osm2pgsql -h
osm2pgsql SVN version 0.80.0 (32bit id space)
        osm2pgsql [options] planet.osm
        osm2pgsql [options] planet.osm.{gz,bz2}
        osm2pgsql [options] file1.osm file2.osm file3.osm
This will import the data from the OSM file(s) into a PostgreSQL database
suitable for use by the Mapnik renderer
   -a|--append          Add the OSM file into the database without removing
                        existing data.
   -b|--bbox            Apply a bounding box filter on the imported data
                        Must be specified as: minlon,minlat,maxlon,maxlat
                        e.g. --bbox -0.5,51.25,0.5,51.75
   -c|--create          Remove existing data from the database. This is the
                        default if --append is not specified.
   -d|--database        The name of the PostgreSQL database to connect
                        to (default: gis).
   -i|--tablespace-index        The name of the PostgreSQL tablespace where
                        all indexes will be created.
                        The following options allow more fine-grained control:
      --tablespace-main-data    tablespace for main tables
      --tablespace-main-index   tablespace for main table indexes
      --tablespace-slim-data    tablespace for slim mode tables
      --tablespace-slim-index   tablespace for slim mode indexes
                        (if unset, use db's default; -i is equivalent to setting
                        --tablespace-main-index and --tablespace-slim-index)
   -l|--latlong         Store data in degrees of latitude & longitude.
   -m|--merc            Store data in proper spherical mercator (default)
   -M|--oldmerc         Store data in the legacy OSM mercator format
   -E|--proj num        Use projection EPSG:num
   -u|--utf8-sanitize   Repair bad UTF8 input data (present in planet
                        dumps prior to August 2007). Adds about 10% overhead.
   -p|--prefix          Prefix for table names (default planet_osm)
   -s|--slim            Store temporary data in the database. This greatly
                        reduces the RAM usage but is much slower. This switch is
                        required if you want to update with --append later.
                        This program was compiled on a 32bit system, so at most
                        3GB of RAM will be used. If you encounter problems
                        during import, you should try this switch.
      --drop            only with --slim: drop temporary tables after import (no updates).
   -S|--style           Location of the style file. Defaults to /usr/local/share/osm2pgsql/default.style
   -C|--cache           Now required for slim and non-slim modes:
                        Use up to this many MB for caching nodes (default: 800)
   -U|--username        Postgresql user name
                        password can be given by prompt or PGPASS environment variable.
   -W|--password        Force password prompt.
   -H|--host            Database server hostname or socket location.
   -P|--port            Database server port.
   -e|--expire-tiles [min_zoom-]max_zoom        Create a tile expiry list.
   -o|--expire-output filename  Output filename for expired tiles list.
   -r|--input-reader    Input frontend.
                        libxml2   - Parse XML using libxml2. (default)
                        primitive - Primitive XML parsing.
                        pbf       - OSM binary format.
   -O|--output          Output backend.
                        pgsql - Output to a PostGIS database. (default)
                        gazetteer - Output to a PostGIS database suitable for gazetteer
                        null  - No output. Useful for testing.
   -x|--extra-attributes
                        Include attributes for each object in the database.
                        This includes the username, userid, timestamp and version.
                        Note: this option also requires additional entries in your style file.
   -k|--hstore          Add tags without column to an additional hstore (key/value) column to postgresql tables
      --hstore-match-only       Only keep objects that have a value in one of the columns
      -                         (normal action with --hstore is to keep all objects)
   -j|--hstore-all      Add all tags to an additional hstore (key/value) column in postgresql tables
   -z|--hstore-column   Add an additional hstore (key/value) column containing all tags
                        that start with the specified string, eg --hstore-column "name:" will
                        produce an extra hstore column that contains all name:xx tags
   -G|--multi-geometry  Generate multi-geometry features in postgresql tables.
   -K|--keep-coastlines Keep coastline data rather than filtering it out.
                        By default natural=coastline tagged data will be discarded based on the
                        assumption that post-processed Coastline Checker shapefiles will be used.
      --number-processes                Specifies the number of parallel processes used for certain operations
                        Default is 1
   -I|--disable-parallel-indexing       Disable indexing all tables concurrently.
      --unlogged        Use unlogged tables (lost on crash but faster). Requires PostgreSQL 9.1.
      --cache-strategy  Specifies the method used to cache nodes in ram.
                                Available options are:
                                dense: caching strategy optimised for full planet import
                                chunked: caching strategy optimised for non-contigouse memory allocation
                                sparse: caching strategy optimised for small extracts
                                optimized: automatically combines dense and sparse strategies for optimal storage efficiency.
                                           optimized may use twice as much virtual memory, but no more physical memory
                                   The default is "chunked"
   -h|--help            Help information.
   -v|--verbose         Verbose output.
Add -v to display supported projections.
Use -E to access any espg projections (usually in /usr/share/proj/epsg)

Written by elrobis

December 28th, 2012 at 9:21 am

Posted in osm2pgsql


Resident “lunatic” of LinkedIn

without comments

Just updated my LinkedIn profile.. This is just TOO FUNNY right now!

I totally had to save this for posterity. Unfortunately it will probably disappear before you can say “Jack Robinson”. 8)

Written by elrobis

December 21st, 2012 at 11:04 am

Posted in Uncategorized

mapnik rundemo.exe error: run from within the demo/c++ folder?

without comments

So I just installed the mapnik 2.0.1 binaries for Windows, and I ran into a “gotcha.” I couldn’t figure this out by Googling, so hopefully this post will help someone.  (Be sure to note the embarrassing conclusion.)

Specifically, following a fresh mapnik install, rundemo.exe told me this:

usage: ./rundemo <mapnik_install_dir>
Usually /usr/local/lib/mapnik
Warning: ./rundemo looks for data in ../data/,
Therefore must be run from within the demo/c++ folder.

Note the first bit:  usage: ./rundemo <mapnik_install_dir>


Well ok, that’s fine. Except this didn’t work, either:

C:\mapnik-2.0.1rc0\demo\c++>rundemo C:\mapnik-2.0.1rc0
running demo ...
looking for 'shape.input' plugin in... C:\mapnik-2.0.1rc0/input/
looking for DejaVuSans font in... C:\mapnik-2.0.1rc0/fonts/DejaVuSans.ttf
### Configuration error: Could not create datasource. No plugin found for type 'shape' (searched in: C:\mapnik-2.0.1rc0/input/)

Hmm, but that is my install path??


Anyway, I finally figured it out. This works:

C:\mapnik-2.0.1rc0\demo\c++>rundemo C:/mapnik-2.0.1rc0/lib/mapnik
running demo …
[… yadda yadda …]


Now for the embarrassing part: I was bouncing between this install tutorial and this other install tutorial, and clearly I did not pay close attention to the second. It turns out, if my eyes were peeled, I might have noticed this:

Then you should be able to run the demo program:

cd c:\mapnik-2.0.1rc0\demo\c++
rundemo ..\..\lib\mapnik


I’d like to quote Donald Knuth and re-state: “Software is hard”, but this is just a good, old-fashioned case of I-was-dumb. Nevertheless, I was stumped and Googling didn’t help, so maybe this will save someone some confusion. :/


“You know those guitars that are like ..double guitars?!?!”

Written by elrobis

December 18th, 2012 at 12:27 pm

Posted in Mapnik


osm2pgsql and windows errors: failed to start MSVCR90.dll, Connection to database failed, etc..

with one comment

I’m following the BostonGIS tutorial(s) to learn how to setup an OpenStreetMap tile server on Windows (XP 32, cos’ that’s what’s on the desk), and I’m running into headache after headache. So this post notes the gotchas I’m encountering and how I’m fixing them (optimistically assuming I fix all of them).

“Thar be dragons..” 8)


Part I: “Failed to start MSVCR90.dll”

First, as recommended I tried using the HOTOSM installer, but when I launched it using the example command provided on the BostonGIS site, I got the error above. I’d logged out/in, rebooted, ad nauseam, to no avail. Also, the problem can’t be my redist libs: this install of XP is only a couple months old, and I have all the redist libs I know of (2005, 2008, 2010; plus, when I try reinstalling a redist it says I have the latest). I consider uninstalling and reinstalling Visual Studio to be more trouble than just trying something else to get the OSM download into PostGIS, so..

Next I tried the latest “alpha” build of osm2pgsql (rather than the all-in-one HOTOSM installer), and that got me around the MSVCR/redist issue. That is, it’s launching..


Part II: the BostonGIS osm2pgsql example command

Error: Connection to database failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket “/tmp/.s.PGSQL.5432”?

…and then…

Error: Connection to database failed: fe_sendauth: no password supplied

…and then…

Out of memory for node cache dense index, try using "--cache-strategy sparse" instead

…and then…

Out of memory for sparse node cache, reduce --cache size

I’ve benefitted greatly from the BostonGIS tutorials, and I’m very thankful for them. (Thank you Regina, Leo, et. al., sincerely!) Having said that, though, here’s a few notes discussing how I addressed the above, for which a couple items are conspicuously missing in the BostonGIS osm2pgsl example command.

So take a look at this:

osm2pgsql missouri.osm.bz2 -H localhost -d OSM -U postgres -P 5432 -W -S default.style -C 512 --slim --hstore --cache-strategy sparse

What’s of-note:

-H localhost (solves the first error)

-W (solves the second error, i.e. lets you enter your db password after ENTER)

--cache-strategy sparse (gets past the third, and into the fourth error)

--slim (from everything I read, you just need this on a 32-bit system)

-C 512 (wicked-small cache size)


21 Minutes. I’m not talking about an episode of Friends.

So those are the extra command parameters that ultimately got me to end-game:

Osm2pgsql took 1317s overall

Setting that wicked-small cache size finally did the trick (it actually finished while I was writing this). At first I started it larger (12000, look under “Parameters”), then I started backing off (4096, 2048; look under “Loading data into your server”). When none of the examples I found worked, I just kept reducing by the common multiples: again, 1024 didn’t work, but 512 did.

I need to do some research, but I guess that cache size suggests my hard disk must be ..wholly ..awful. :/ I’m not a gear-hound (clearly, I’m using XP), but that’s my best-guess.

Written by elrobis

December 18th, 2012 at 9:55 am

Decode Google Map encoded points as Well Known Text (WKT) with Python

with 2 comments

I had a close encounter of the 5th kind yesterday.. here’s the gist..

It started when someone “gave” me a GIS dataset (..of polygons, kind of..) that a colleague of theirs, way back in ancient history, chose to pre-cook as ASCII-encoded point pairs. Their intention was almost certainly to use the pre-cooked data in Google Maps. Anyway, being arguably sane, I wanted to return this data to a more GIS-normal format so I could put it in a database like MySQL or Postgres and use it for other stuff.

I considered a few different approaches to this problem, including creating a Google Map that could load-in all of the polygons from their encodings, then iterate over the polygons, interrogate the polygon point pairs, and finally concatenate WKT features from the points and save the geofeatures into a MySQL table. This approach offered the advantage of using Google’s existing Maps API to do the decoding for me. But let’s face it, that’s lame, uninspired, and not inventive.. it wasn’t even interesting.

Besides..  I wanted to use Python.

I expected to find a Python recipe for this looming in the misty www, but I didn’t. However, I did find a JavaScript recipe by trolling around in Mark McClure’s website. Specifically, he provides a Polyline Decoder utility, and when I viewed the page source, I found the JavaScript code that actually does the decoding (opening in FireFox will show you the code, or IE should prompt you to download the file).

Long story short, the following Python method is an adaptation of Mark McClure’s JavaScript method (twisted a little to return WKT features rather than the pure point array). If you’re somewhat comfortable with Python, you should be able to copy/paste the method right into your Python file and start calling it; just pass-in the encoded point string and let the method do the rest.

Best / Elijah


def decodeGMapPolylineEncoding(asciiEncodedString):
    print("\nExtrapolating WKT For:")
    print(asciiEncodedString)

    strLen = len(asciiEncodedString)

    index = 0
    lat = 0
    lng = 0
    coordPairString = ""

    # Make it easy to close PolyWKT with the first pair.
    countOfLatLonPairs = 0
    firstLatLonPair = ""
    gotFirstPair = False

    while index < strLen:
        shift = 0
        result = 0

        stayInLoop = True
        while stayInLoop:                                                # GET THE LATITUDE
            b = ord(asciiEncodedString[index]) - 63
            result |= (b & 0x1f) << shift
            shift += 5
            index += 1

            if not b >= 0x20:
                stayInLoop = False

        # Python ternary instruction..
        dlat = ~(result >> 1) if (result & 1) else (result >> 1)
        lat += dlat

        shift = 0
        result = 0

        stayInLoop = True
        while stayInLoop:                                                # GET THE LONGITUDE
            b = ord(asciiEncodedString[index]) - 63
            result |= (b & 0x1f) << shift
            shift += 5
            index += 1

            if not b >= 0x20:
                stayInLoop = False

        # Python ternary instruction..
        dlng = ~(result >> 1) if (result & 1) else (result >> 1)
        lng += dlng

        lonNum = lng * 1e-5
        latNum = lat * 1e-5
        # Separate successive coordinate pairs with a comma (none before the first pair).
        if countOfLatLonPairs > 0:
            coordPairString += ","
        coordPairString += str(lonNum) + " " + str(latNum)

        if gotFirstPair == False:
            gotFirstPair = True
            firstLatLonPair = str(lonNum) + " " + str(latNum)

        countOfLatLonPairs += 1

    # The data I was converting was rather dirty..
    # At first I expected 100% polygons, but some encodings produced only one or
    # two points, and one or two points cannot represent a polygon. Since I wanted
    # valid WKT for every encoding, I screen for 1, 2, and >=3 points and return
    # point, line, and polygon WKT, respectively.
    # It's arguable that any encodings resulting in only one or two points should be rejected.
    wkt = ""
    if countOfLatLonPairs == 1:
        wkt = "POINT(" + coordPairString + ")"
    elif countOfLatLonPairs == 2:
        wkt = "LINESTRING(" + coordPairString + ")"  # LINESTRING is the valid WKT keyword for a line
    elif countOfLatLonPairs >= 3:
        wkt = "POLYGON((" + coordPairString + "," + firstLatLonPair + "))"

    return wkt
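For a quick sanity check, here’s a compact, self-contained restatement of the same decoding loop (minus the WKT wrapping), run against the example polyline from Google’s encoded-polyline documentation. The helper name and the rounding are mine, not from the original post:

```python
def decode_polyline(encoded):
    # Same bit-twiddling as the method above: each coordinate delta is packed
    # in 5-bit chunks, offset by 63, with bit 0 of the result carrying the sign.
    index, lat, lng = 0, 0, 0
    points = []
    while index < len(encoded):
        deltas = []
        for _ in range(2):  # latitude delta first, then longitude delta
            shift = result = 0
            while True:
                b = ord(encoded[index]) - 63
                index += 1
                result |= (b & 0x1f) << shift
                shift += 5
                if b < 0x20:
                    break
            deltas.append(~(result >> 1) if (result & 1) else (result >> 1))
        lat += deltas[0]
        lng += deltas[1]
        points.append((round(lng * 1e-5, 5), round(lat * 1e-5, 5)))
    return points

# The example polyline from Google's encoded polyline algorithm docs:
print(decode_polyline("_p~iF~ps|U_ulLnnqC_mqNvxq`@"))
# → [(-120.2, 38.5), (-120.95, 40.7), (-126.453, 43.252)]
```

Three decoded points would make this function’s WKT branch emit a POLYGON, which matches the dirty-data screening described in the comments above.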

Written by elrobis

October 20th, 2012 at 1:00 pm

PostGIS: count all features of each GeometryType in a spatial table

with one comment

Sometimes, just when you think you’ve got something figured out –you get reminded that you really don’t.  :/

As you may know, ESRI allows for single and multi-part geometries to live in the same FeatureClass. So, if I have a shapefile of roads, there might be both LINESTRING and MULTILINESTRING features in that dataset. I live just loose enough not to care about that. But I do need to be aware of it when I’m cobbling data in PostGIS.

In this case, I was getting a PostGIS error trying to do a Dissolve-By-SQL, so I thought, why not get a quick count of each GeometryType in the dataset? Maybe I was running into issues because single and multi-part geometries were blurred together. It took me an embarrassing chunk of time to get this right, so I figured I’d post the recipe in case I need a reminder later.

SELECT
  GeometryType( wkb_geometry ) as geomType,
  COUNT( wkb_geometry ) as featureCount
FROM anyGeoDataTable
WHERE wkb_geometry IS NOT NULL
GROUP BY GeometryType( wkb_geometry );
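The same group-and-count idea can be sketched outside the database, too. Here’s a minimal Python illustration over hypothetical WKT strings (the geometries below are made up for the example):

```python
from collections import Counter

# Hypothetical mixed single/multi-part WKT values, like a roads table might hold.
geoms = [
    "MULTILINESTRING((0 0,1 1),(2 2,3 3))",
    "LINESTRING(0 0,1 1)",
    "LINESTRING(5 5,6 6)",
    None,  # a NULL geometry, excluded just like the WHERE clause does
]

# Group by the geometry type (the token before the first parenthesis) and count.
counts = Counter(g.split("(", 1)[0].strip() for g in geoms if g is not None)
print(counts)
```

The Counter plays the role of GROUP BY + COUNT(): one bucket per geometry type, NULLs screened out first.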


[Update 4.16.2012]
I wanted to do the same thing in MySQL the other day. It’s essentially the same query, but note the need to use CONVERT() function to make the output properly render in MySQL Workbench:

SELECT
  CONVERT( GeometryType( shape ) USING utf8 ) as geomType,
  COUNT( shape ) as featureCount
FROM anyGeoDataTable
GROUP BY GeometryType( shape );


Without applying the CONVERT() function, MySQL Workbench just shows “blob” in the return set.

Anyway.. back to cracking some nut..  :]

Written by elrobis

January 21st, 2012 at 9:17 pm

Posted in PostGIS


Prepare a Shapefile for OpenScales using ogr2ogr and PostGREsql

with one comment

This post explains how to import GIS data (a shapefile, in this case) into a database (PostGREsql) so it can be consumed by most any mapping API. I have OpenScales in mind, but this approach will support any mapping app with functions for rendering feature overlays using geodetic coordinates (i.e. longitude and latitude). In many cases, you’ll need to translate your feature data out of a projected/cartesian system and into a geodetic/spherical system, so I’ll include a demonstration of that.

Quick and Dirty Summary

This approach has two parts. First, we’ll use GDAL’s ogr2ogr utility to import a shapefile into our database. Second, we’ll use a few SQL commands to translate our data from a projected to a geodetic system, as well as optimize the table for fast query speeds.


The following prerequisites will need to be met in order to follow along:

1) GDAL is installed. If you need to install GDAL, check out my earlier post titled Install GDAL on Windows. Alternatively, you could install FWTools, which is admittedly easier, but that package is no longer maintained and it’s becoming out-of-date as GDAL/OGR continues to evolve.

2) PostGREsql is installed, and the PostGIS extension is enabled. If you need to install PostGREsql and PostGIS, check out the tutorial at Boston GIS demonstrating how to acquire and install PostGREsql with PostGIS. It won’t hurt to review their entire tutorial, but I deviate from their approach once installation is complete (look for their sub-heading, Loading GIS Data Into the Database).

3) You have some geodata. I think the typical reader will have their own shapefile, personal geodatabase, or otherwise, but if you need something to follow along, here’s a US States shapefile projected to NAD83 Albers Equal Area Conic:


Loading Geodata into PostGREsql / PostGIS

To push shapefile data into your geodatabase, you can run an ogr2ogr script like this:

ogr2ogr -f "PostgreSQL" PG:"host= user=youruser dbname=yourdb password=yourpass" "E:\4_GIS\01_tutorials\usstates_nad83_aeac\usstates_albers.shp" -nln usstates -nlt geometry

For deeper reading on ogr2ogr utility flags (like -nln and nlt), check out the usage notes for ogr2ogr. Also, it may be worth you while to peruse the OGR PostgreSQL driver page, as well as the Advanced Driver Information page. In the meantime, here are a few quick notes regarding my script:

-f "PostgreSQL" PG:"host= user=youruser dbname=yourdb password=yourpass"  This tells OGR you’re exporting to PostgreSQL with the following connection string. Notice that my connection string is wrapped in double-quotes (").

"E:\4_GIS\01_tutorials\usstates_nad83_aeac\usstates_albers.shp"   This is the path to my shapefile input data. Once again, I wrapped this value in double-quotes ("). I do this to prevent the console from introducing linebreaks into the argument value and confusing the parser.

-nln usstates  The -nln flag means “rename the table on export”. In other words, my PostGREsql db will get a new table named usstates, and not one named usstates_albers.

-nlt geometry  This one’s particularly important for polygon data. It tells OGR “accept any geometry you encounter and store it in the feature’s geometry column”. Oftentimes, a polygon dataset will have polygons and multipolygons in the same table. For example here’s a narrow column of Well Known Text (WKT) geometries from the albers shapefile so you can see what I mean:

MULTIPOLYGON (((-1827806.2165497246 1227…..
POLYGON ((-1148108.0134327484 649421.311…..
MULTIPOLYGON (((1949555.0881544715 75264…..
POLYGON ((-199237.01920416657 704421.540…..
POLYGON ((-519870.38897088548 372373.616…..

If you run the ogr2ogr script noted above without -nlt geometry, you’ll get an error like this:

ERROR 1: Terminating translation prematurely after failed
translation of layer usstates_albers (use -skipfailures to skip errors)

By default, OGR refuses to mix geometry types in a table, so -nlt geometry allows you to duck that requirement and store both Polygon and Multipolygon features in the same table. You could optionally instruct OGR to “explode” Multipolygons into individual Polygons using the -explodecollections flag, as depicted in the following screenshot, but I don’t recommend that solution for the intended use case. For example, if a map user clicks on Michigan’s Upper Peninsula, I want the whole state to be selected, not just the UP. I’m not saying you can’t make that happen after exploding multifeatures; rather, it’s just not the approach I favor.

Without -nlt geometry, ogr2ogr will throw an error if it attempts to export polygons and multipolygons into the same table. Alternatively, you can use the flag -explodecollections (not recommended in this case) to translate your multipolygons into several polygons.

Assuming you used a script like the one I initially provided, you should be able to open pgAdmin III (the PostGREsql admin GUI that installs with the database) and see your new usstates table:

Post-Processing your Geodata with SQL Instructions

With pgAdmin III open, expand the Tools menu and launch the Query tool. You’ll use the Query tool and the following SQL instructions to prep your data for production. I’ll start by listing all the queries together, then I’ll provide some deeper explanation in the text that follows.

SELECT SRID(wkb_geometry) FROM usstates;
SELECT * FROM spatial_ref_sys WHERE srid = 900925;
SELECT st_asText(wkb_geometry) FROM usstates;
ALTER TABLE usstates ADD COLUMN wgs84geom GEOMETRY;
UPDATE usstates SET wgs84geom = st_Transform(wkb_geometry, 4326);
SELECT SRID(wgs84geom) FROM usstates;
SELECT st_asText(wgs84geom) FROM usstates;
VACUUM usstates;
CREATE INDEX usstates_wgs84_idx ON usstates USING GIST(wgs84geom);

Basically, the ALTER, UPDATE, VACUUM, and CREATE INDEX statements do the actual work, while the rest are sanity checks. Steps 4 and 5 perform the geometry transformation, and the last two steps do some house-cleaning and table optimization. Now I’ll provide a one-by-one discussion of each step.

1) SELECT SRID(wkb_geometry) FROM usstates;

Here we’re getting the SRID for the features in this layer (which is 900925 on my system). By default, OGR will store feature geometries in a field called wkb_geometry. Also, your PostGIS installation includes a table named spatial_ref_sys that stores coordinate system definitions necessary for the database to remain “spatially aware” of your new table as well as the other spatial datasets the system is managing. Consider this: if you want to select points from one layer that fall inside polygons from another layer, PostGIS needs to understand the coordinate systems of both datasets so that it can align their features for analysis. So when we run the SRID() function on the table’s geometry field, wkb_geometry, PostGREsql will return the unique identifier for the coordinate system used to define the features in our table.

2) SELECT * FROM spatial_ref_sys WHERE srid = 900925;

In this step we answer the question, “Does the SRS established at import make sense for the data?” This statement queries the PostGIS spatial_ref_sys table for the coordinate system whose ID is referenced in the previous step. Check the srText field for a readable version of the coordinate system. Mine begins with “PROJCS[“North_America_Albers_Equal_Area_Coni..”  That’s what I expected, and that’s a good thing.

3) SELECT st_asText(wkb_geometry) FROM usstates;

Now I like to do a quick query to see the WKT for some of my features. The geometry field wkb_geometry was created by ogr2ogr when it imported the shapefile into PostGREsql. If you don’t like this name, you can use the creation option -lco GEOMETRY_NAME=geom in your ogr2ogr import script to set the name of the geometry field at import time. As shown in the image, the WKT for my features looks like I would expect.

Querying feature geometries using the ST_asText() function on the geometry column, wkb_geometry.

4) ALTER TABLE usstates ADD COLUMN wgs84geom GEOMETRY;

This instruction adds a new column to the table, which I’ll use to store the feature geometries for my US States in geodetic coordinates. The column will be named wgs84geom and will expect data of type GEOMETRY. In other words, this field will store a permanent “cast” of our feature geometries in the WGS84 coordinate system, which is very popular due to its use by the famous Global Positioning System (GPS).

Note: GIS coordinate systems are complex beasts, and it’s easy to get lost in their particulars. Nevertheless, one distinction is very important, and that’s the difference between projected systems and geodetic systems. Projected systems are two-dimensional; these are the X/Y grids you used for trigonometry exercises in high school. On the other hand, geodetic systems define coordinate geometries within a three-dimensional, spherical space.

Both systems are roses by many names. For instance, projected systems may be called “Cartesian” or “Geometric”, and geodetic systems may be called “Sexagesimal” or “Geographic”. The PostGIS community may more often refer to features as being geometry or geography data, meaning projected vs. geodetic coordinates, respectively. So if you run across language like this, realize people intend for geometry to imply X and Y coordinates in a Cartesian space, and for geography to mean familiar longitude and latitude coordinates.

In this image, state boundaries are drawn in Albers Equal Area Conic, which is a projected coordinate system.


Here the state boundaries are "represented" in NAD83 geographic coordinates. NAD83 geographic (EPSG 4269), is similar, if not nearly identical, to WGS84. As you can see, geodetic systems are difficult to represent in 2D space; as such, unprojected maps of large areas tend to look "suspect".
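To make the projected-vs.-geodetic distinction a little more concrete, here’s a small pure-Python sketch of my own (the two points are rough approximations of St. Louis and Kansas City, not data from this tutorial). It contrasts a naive Euclidean distance computed on raw lon/lat degrees with a spherical great-circle distance:

```python
import math

# Two points in geodetic (lon, lat) degrees: roughly St. Louis and Kansas City.
a = (-90.20, 38.63)
b = (-94.58, 39.10)

# Naive "projected" treatment: Euclidean distance in degree units.
# This number is meaningless as a ground distance.
planar_deg = math.hypot(b[0] - a[0], b[1] - a[1])

# Geodetic treatment: great-circle (haversine) distance on a sphere,
# using a mean Earth radius of ~6371 km.
lon1, lat1, lon2, lat2 = map(math.radians, (*a, *b))
h = (math.sin((lat2 - lat1) / 2) ** 2
     + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
km = 2 * 6371 * math.asin(math.sqrt(h))

print(round(planar_deg, 2), "degrees vs.", round(km), "km")
```

The “degree distance” has no physical meaning because a degree of longitude shrinks as you move away from the equator; the spherical calculation accounts for that, which is exactly why geodetic data deserves geodetic treatment.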

5) UPDATE usstates SET wgs84geom = st_Transform(wkb_geometry, 4326); 

With the new column ready to go, you can now wield an UPDATE statement and the st_Transform() function to translate feature geometries from their projected coordinates to their geographic WGS84 coordinates. The st_Transform() function expects two arguments, the source geometry field to transform, and the EPSG code for the output coordinate system. WGS84 is a fundamentally popular coordinate system, and its EPSG code, 4326, is easy to find. If you do not know the EPSG code for your preferred coordinate system, head over to http://spatialreference.org/ and do some quick research.

Note: We could optionally perform coordinate system transformations on the fly by calling st_Transform on the geometry field right in a query. However, by casting our feature geometries in advance, we remove that calculation overhead and gain a subtle efficiency boost. This can particularly improve response times for spatial queries.
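For comparison, here’s what the two approaches might look like side by side. This is just a sketch; the test point (somewhere in mid-Missouri) is made up for illustration:

```sql
-- On-the-fly transform: recomputed for every row, every time the query runs.
SELECT *
FROM usstates
WHERE ST_Contains(
    ST_Transform(wkb_geometry, 4326),
    ST_SetSRID(ST_MakePoint(-92.33, 38.95), 4326)
);

-- Pre-cast column: no per-row transform needed, and the GIST index
-- created in the final step can accelerate the containment test.
SELECT *
FROM usstates
WHERE ST_Contains(wgs84geom, ST_SetSRID(ST_MakePoint(-92.33, 38.95), 4326));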

6) SELECT SRID(wgs84geom) FROM usstates;

Like the second step, this query is only intended to confirm whether the SRID established in the previous step makes sense for the data. It should return 4326.

7) SELECT st_asText(wgs84geom) FROM usstates;

Also like the third step, here I’m querying the feature geometries as WKT to make sure they’re defined by longitude/latitude coordinate pairs.

Well Known Text (WKT) for features transformed into the WGS84 coordinate system.

8) VACUUM usstates;

Once you’re finished with the geometry transformation, call a VACUUM instruction for the usstates table. PostGREsql likes to have a deep knowledge of its feature tables so that it can optimize queries. To this end, the VACUUM command instructs PostGREsql to “gather fresh intel” on your table so that it can make better decisions. This step is particularly necessary for tables with a large number of features as well as tables experiencing a lot of maintenance (i.e. frequent feature INSERT and UPDATE activity).

9) CREATE INDEX usstates_wgs84_idx ON usstates USING GIST(wgs84geom);

Finally, if you intend to perform queries on this table, particularly spatial intersection queries on the new geometry column, you’ll want to create a spatial index referencing that column. Here, usstates_wgs84_idx is just a naming convention that implies TableName_FieldName_ThisIsAnIndex. To create the index, use the USING GIST(...) clause and pass in the table column you intend to search on, for instance, wgs84geom.

After following along with this post, you should have learned how to 1) use ogr2ogr to populate a PostGREsql database with shapefile data, 2) leverage PostGIS functions to perform a coordinate system transformation in the database, and 3) apply PostGREsql optimization functions to optimize the table for production use.

I hope you found this beneficial. Thanks for reading.


Written by elrobis

November 19th, 2011 at 5:55 pm

Posted in GDAL/OGR


ogr2ogr: Export Well Known Text (WKT) for one feature to a CSV file

without comments

Perhaps you’re looking for this?

ogr2ogr -f "CSV" "E:\4_GIS\NorthArkCartoData\UnitedStates\MO_wkt" "E:\4_GIS\NorthArkCartoData\UnitedStates\USStates.shp" -sql " SELECT * FROM usstates WHERE STATE_NAME = 'Missouri' " -lco "GEOMETRY=AS_WKT" -lco "LINEFORMAT=CRLF" -lco "SEPARATOR=SEMICOLON"

My buddy at work needed a way to get the WKT geometry definition for a single feature in a shapefile. I thought, “surely this is something we can do with OGR?” Lo’ and behold, yes it was. 🙂

The script above uses OGR SQL to interrogate a shapefile for one lone feature and, when it’s found, exports the record to a comma separated values (CSV) file (or in this case, a semicolon delimited file). Here’s a quick break down:

-f "CSV" "E:\4_GIS\NorthArkCartoData\UnitedStates\MO_wkt"  This means CREATE the directory MO_wkt and export the derived output to this directory. I wrapped the file path in double-quotes (") so linebreaks could not be introduced by the terminal and confuse the argument parser.

GOTCHA! DO NOT create the MO_wkt directory before running the script. Let OGR create that directory. However, relative to my example, E:\4_GIS\NorthArkCartoData\UnitedStates\ could be on your file system already.

"E:\4_GIS\NorthArkCartoData\UnitedStates\USStates.shp"  This is the shapefile you want to interrogate. Yet again, the file path is wrapped in double-quotes (").

-sql " SELECT * FROM usstates WHERE STATE_NAME = 'Missouri' "   This is the OGR SQL expression I used to grab the entire record (*) for features where the state_name field was ‘Missouri’. Notice how the search term ('Missouri') is wrapped in single-quotes ('). If the field you’re searching on is a VARCHAR or a TEXT field (obviously a name requires non-numeric characters), you’ll need to wrap your search term in single-quotes; if you were searching on a numeric field, you would not wrap your search term in quotes. Also, just like the file paths shown above, I recommend always wrapping long values and expressions in double-quotes to avoid confusing the argument parser. Next, and this is a matter of user preference, but notice how I added extra spaces between the double-quotes and my SQL expression (i.e. " SELECT.. "). This is totally legit, and it may help to distinguish the SQL query from the rest of the OGR script.

I could have exported a subset of the fields using an expression like:

" SELECT STATE_ABBR as abbr, STATE_NAME as name FROM usstates WHERE STATE_NAME = 'Missouri' "

This expression selects only the STATE_ABBR and STATE_NAME fields and renames them abbr and name, respectively, by applying the AS operator. The fields in the subset are comma-separated, but the last field is not followed by a comma (this is basic SQL syntax, but I’m explaining it in case the readership includes some first-timers). If you don’t want to rename the fields in your selection, just omit the AS rename from your expression. Note that when exporting to CSV, the geometry is exported automatically according to the geometry creation option, which we set next; unfortunately, this means there’s nothing you can do to omit the geometry from your selection.

These last few parameters use -lco flags to set various “creation options” used by the CSV driver. The creation options are unique to each format OGR can export, and in this case I’m demonstrating a particular recipe. If you want to know more about the CSV creation options, check out the OGR CSV driver page. Alternatively, if you want to export to a different file format, visit the OGR formats page, select the appropriate driver, and read up on the various creation options.

-lco "GEOMETRY=AS_WKT"  This creation option means “Export the geometry value as Well Known Text”. Other options that might interest you include GEOMETRY=AS_XYZ, GEOMETRY=AS_XY or GEOMETRY=AS_YX.

-lco "LINEFORMAT=CRLF"  This creation option means “use a Windows-formatted linebreak”. If you’re running Linux you should use "LINEFORMAT=LF". Honestly, I can’t remember if we really needed this flag, so you may want to try skipping this option.

-lco "SEPARATOR=SEMICOLON"  This flag specifies the delimiter used to separate fields and values from one another in the CSV. We chose SEMICOLON because Polygon WKT already has a bunch of commas in it, so using the SEMICOLON delimiter helped us visually identify the WKT we were really looking for. Your other options include SEPARATOR=COMMA and SEPARATOR=TAB.
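Once the export exists, pulling the WKT back out programmatically is easy. Here’s a sketch using Python’s csv module against simulated file contents; the sample string mirrors the semicolon-delimited, WKT-first layout described above, but the field values are made up for illustration:

```python
import csv
import io

# Simulated contents of the exported file (semicolon-delimited, WKT column first),
# standing in for the real MO_wkt output; the geometry here is a toy triangle.
sample = 'WKT;STATE_NAME\n"POLYGON((0 0,1 0,1 1,0 0))";Missouri\n'

# DictReader handles the quoted WKT field (full of commas) without splitting it,
# because the field separator is the semicolon.
rows = list(csv.DictReader(io.StringIO(sample), delimiter=";"))
for row in rows:
    print(row["STATE_NAME"], "->", row["WKT"])
```

This is essentially what we did by hand in Notepad: the semicolon delimiter keeps the comma-riddled WKT intact as a single field.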

Alternatively, do this with ogrinfo..

You could also use ogrinfo to dump a feature’s WKT right into the console. This approach requires using one of the so-called special fields recognized by OGR SQL. To learn more about OGR SQL, its particulars, and these special fields, visit the OGR SQL help page. Without further ado, here’s my ogrinfo script:

ogrinfo "E:\4_GIS\NorthArkCartoData\UnitedStates\USStates.shp" -geom=yes -sql " SELECT OGR_GEOM_WKT FROM usstates WHERE STATE_NAME = 'Missouri' "

In this case, the OGR SQL expression requests only the “OGR_GEOM_WKT” special field. Because we’re using ogrinfo, our output stays in the console. However, this was ultimately undesirable for our task, as “real world” Polygon WKT typically spills beyond the console buffer. Moreover, copying the WKT geometry out of the console introduced hundreds of unwanted linebreaks into the WKT. So, for these reasons, we preferred to export our WKT results to a CSV, because it allowed us to easily see, select, and copy the desired WKT data without any fuss using good ol’ Notepad with word wrap.

I hope you like it. 🙂


P.S. If you don’t have OGR and want (correction, you NEED) to install it, check out my earlier post describing how to install GDAL/OGR on a Windows system.

Written by elrobis

November 18th, 2011 at 5:59 pm

Posted in GDAL/OGR


OpenScales: Coordinate Transformation from Uncommon Projections

without comments


UPDATE 2.19.2013:

It turns out you don’t have to rebuild OpenScales to apply uncommon projections; you can just assign the projection directly to ProjProjection.defs. (See Simon L.’s answer here in the OpenScales user group.)

So if I modify my example toward the bottom of the post (at #5), here is how I would apply the alternate, and arguably better, approach.

(Fair warning: I haven’t compiled and tested this yet; the only difference is the first line, but this is how I expect it should work. If it doesn’t work like this, it’s likely because an instance of ProjProjection needs to be created first, and the defs property of the instance needs to be appended. In other words, if this doesn’t work, follow the line instantiating moNADProj with moNADProj.defs[‘EPSG:102697’] = “+title…. blah blah”; One of these days I’ll make super-sure this is correct, but I didn’t want to forget where I found this, hence updating the post.)

private function transformCoordinates():void
{
        // This single line is the only difference:
        ProjProjection.defs['EPSG:102697'] = "+title=NAD_1983_StatePlane_Missouri_Central_FIPS_2402_Feet +proj=tmerc +lat_0=35.83333333333334 +lon_0=-92.5 +k=0.9999333333333333 +x_0=500000.0000000002 +y_0=0 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs";

        // Source point's projection, called using the KEY defined above..
        var moNADProj:ProjProjection = new ProjProjection("EPSG:102697");

        // Destination projection (WGS84), which is already available in proj4as..
        var wgs84Proj:ProjProjection = new ProjProjection("EPSG:4326");

        // The source point, instantiated here using its easting (value along X-axis)
        // and northing (value along the Y-axis) parameters..
        var projPt:ProjPoint = new ProjPoint(easting, northing);

        // The Proj4as.transform() method returns a ProjPoint object in the new
        // coordinate system; in this case, longitude and latitude.
        var lonLatPt:ProjPoint = Proj4as.transform(moNADProj, wgs84Proj, projPt);
}


Original Post:

In short, I wanted to transform projected source coordinates from NAD_1983_StatePlane_Missouri_Central_FIPS_2402_Feet to geodetic coordinates in WGS84. As expected, OpenScales didn’t recognize my relatively uncommon projection, but I had high hopes I could implement its properties piecemeal, possibly through the ProjParams class, or in some similar fashion.

At first I hoped I would find a static method on ProjProjection like “ProjProjection.fromProj4Def(String:proj4Def)” which would return an instance of ProjProjection for any valid Proj4 expression, but such a method never emerged from the fog. I did, however, manage to “append” my rogue projection into the proj4as source code, which ultimately solved my problem, though admittedly, I still think a method like the one I described would be a nice addition. 🙂

So getting to the point, if you need to add an uncommon projection to your OpenScales-derived project, here’s what you can do:

1) Remove openscales-proj4as-1.2.1.swc from your project (alternatively, you could mod these instructions and just recompile the swc) and copy the proj4as source package into your own project’s src directory. In other words, relative to your unpacked OpenScales download, you want to copy in the part I’m emphasizing in bold and underlined:


Adding the proj4as Code to your Project


2) Now open the file ProjProjection.as and scroll down to about line 20. You should see something like what follows; these are the various projections available through proj4as right out-of-the-box:

static public const defs:Object={
	'EPSG:900913': "+title=Google Mercator EPSG:900913 +proj=merc +ellps=WGS84 +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +no_defs",
	'WGS84': "+title=long/lat:WGS84 +proj=longlat +ellps=WGS84 +datum=WGS84 +units=degrees",
	'EPSG:4326': "+title=long/lat:WGS84 +proj=longlat +a=6378137.0 +b=6356752.31424518 +ellps=WGS84 +datum=WGS84 +units=degrees",
	'EPSG:4269': "+title=long/lat:NAD83 +proj=longlat +a=6378137.0 +b=6356752.31414036 +ellps=GRS80 +datum=NAD83 +units=degrees",

	[.. .. expect several lines of these .. ..]


3) At this point, you’ll add your projection’s Proj.4 text to the mix. If you don’t know the Proj.4 expression for your projection, drop by http://www.spatialreference.org, look up your projection, and select the Proj4 link. Once it loads, just copy the definition text to your clipboard..

Getting the Proj4 Definition Expression for a Projection


4) Now, add your projection’s Proj.4 expression to the defs object, shown in bold, below. You will need to add the underlined part yourself, and frankly, these values are somewhat arbitrary. On the left, ‘EPSG:102697’ becomes the KEY you’ll use to reference the projection later in code, and the title is even less significant. I recommend keeping your entries relatively sane; basically, “make it look like the neighboring expressions”. Oh.. and you can put it anywhere in the list, separated by line breaks if you prefer. I put mine right above the OSGB36/British National Grid entry:

static public const defs:Object={
	'EPSG:900913': "+title=Google Mercator EPSG:900913 +proj=merc +ellps=WGS84 +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +no_defs",
	'WGS84': "+title=long/lat:WGS84 +proj=longlat +ellps=WGS84 +datum=WGS84 +units=degrees",
	'EPSG:4326': "+title=long/lat:WGS84 +proj=longlat +a=6378137.0 +b=6356752.31424518 +ellps=WGS84 +datum=WGS84 +units=degrees",
	'EPSG:4269': "+title=long/lat:NAD83 +proj=longlat +a=6378137.0 +b=6356752.31414036 +ellps=GRS80 +datum=NAD83 +units=degrees",

	'EPSG:102697': "+title=NAD_1983_StatePlane_Missouri_Central_FIPS_2402_Feet +proj=tmerc +lat_0=35.83333333333334 +lon_0=-92.5 +k=0.9999333333333333 +x_0=500000.0000000002 +y_0=0 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs",

	[.. .. expect several lines of these .. ..]
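
As an aside, each defs entry is just a Proj.4 parameter string. Here’s a small sketch (Python, purely illustrative, not part of OpenScales) that splits a definition like the one above into key/value pairs so you can eyeball the parameters:

```python
def parse_proj4(definition):
    """Split a Proj.4 definition string into a dict of parameters.

    Value-less flags (e.g. +no_defs) are stored as True. Note: titles
    containing spaces (like "Google Mercator") would need smarter
    tokenizing; this sketch assumes underscore-style titles.
    """
    params = {}
    for token in definition.split():
        if not token.startswith("+"):
            continue  # skip stray tokens
        key, sep, value = token[1:].partition("=")
        params[key] = value if sep else True
    return params

# The EPSG:102697 definition used in the post:
mo_central = ("+title=NAD_1983_StatePlane_Missouri_Central_FIPS_2402_Feet "
              "+proj=tmerc +lat_0=35.83333333333334 +lon_0=-92.5 "
              "+k=0.9999333333333333 +x_0=500000.0000000002 +y_0=0 "
              "+ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs")

params = parse_proj4(mo_central)
print(params["proj"])   # tmerc
print(params["datum"])  # NAD83
```

Seeing the parameters broken out like this makes it easier to spot a typo before you recompile or re-run anything.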


5) That handles adding your projection to proj4as. Assuming you want to consume this somewhere in your Flex/ActionScript application, a function like this one will perform your coordinate transformation (this is just to demonstrate the idea; I don’t actually recommend instantiating both projections each time you want to do a conversion):

private function transformCoordinates():void
{
        // Source point's projection, called using the KEY defined above..
        var moNADProj:ProjProjection = new ProjProjection("EPSG:102697");

        // Destination projection (WGS84), which is already available in proj4as..
        var wgs84Proj:ProjProjection = new ProjProjection("EPSG:4326");

        // The source point, instantiated here using its easting (value along X-axis)
        // and northing (value along the Y-axis) parameters..
        var projPt:ProjPoint = new ProjPoint(easting, northing);

        // The Proj4as.transform() method returns a ProjPoint object in the new
        // coordinate system; in this case, longitude and latitude. Just to
        // emphasize, longitude is returned as X and latitude as Y..
        var lonLatPt:ProjPoint = Proj4as.transform(moNADProj, wgs84Proj, projPt);
}

6) Finally, and mostly for the sake of proving all this worked, here’s a screenshot of the application I’m working on (an AIR app that will integrate with a USB GPS device) showing the coordinate transformation in action.

The map, which is in Mo State Plane Central, tracks mouse movement as easting/northing values (bottom-center), and at the time, my mouse was hovering over Bass Pro Shops, which is a significant Springfield landmark to say the least. The proj4as-transformed coordinates are at bottom-right:

proj4as Coordinate Transformation in Action

If you want to scrutinize the accuracy of the coordinate transformation, here’s that location in Google Maps: http://maps.google.com/?ll=37.18069720,-93.29592962

And for the curious among you, my USB GPS device (coords at bottom-left) is reporting quite accurately, too.. but that’s a post for another day..

Cheers, Elijah

Written by elrobis

November 6th, 2011 at 12:25 pm

Nikon Coolpix 3100 Near Infrared Hack

without comments

While this post is not in theme with the stated goals of the blog, I wanted to save my notes somewhere, as well as make a URL to show off to my friends. On the other hand, I’m optimistic my experiment will enable some “eco-minded research”, so perhaps there’s still a loose tether linking this back to the GIS community..

Basically, I hacked my old, 3 megapixel Nikon Coolpix 3100 today so that I could take NIR (near infrared) pictures. The following images serve as a storyboard for the process.

0:  The tools and materials involved include (clockwise from bottom-left)
—-> small round-nose pliers (my wife uses these for beading)
—-> small Phillips-head screwdriver
—-> butt-end of unprocessed camera film
—-> small curve-nosed pliers
—-> scissors
—-> electrical tape
—-> the Nikon Coolpix 3100 (the pic is post-hack)
—-> one depleted roll of hockey tape*

* The expired roll of hockey tape became the “housing” for a DIY visible bands filter, which you see fitted to the front of the camera. By some amazing coincidence the cardboard tubing holding the tape fit the camera perfectly. The “donut hole” was just the right diameter to cling to the face of the camera; plus, it was just deep-enough to stay clear of the lens when in use.



1:  Looking at the guts: I had to pull 22 screws to reach the NIR filter and CCD mount. At this point, I was relatively convinced the sucker was dead meat. The IR filter was just a pinky-nail-sized piece of glass clinging to a rubber footing. I was able to gently tease the filter from the footing using my index finger.

Caveat: If you try a similar project, I recommend going to the trouble to illustrate or take notes documenting the locations of screws as you remove them. While my camera still worked without any screws being replaced (no kidding), once I was finished experimenting, I wasn’t able to replace 7 of 22 screws into the camera assembly. There wasn’t a functional consequence in my case, but I hate knowing that I didn’t get it 100% back together.


2: After removing the NIR filter, I snapped the camera back together (using ZERO of the 22 screws, expecting to crack it open again shortly) and was fully amazed when it powered on following surgery. Next, I fired up a burner on the stove and photographed the hot burner across from its cool neighbor. At this point, without the NIR filter, the rear-burner coils appear bright pink, which I expected, but the distinction between the active burner and the inactive burner was not as pronounced as I hoped. So I decided to incorporate another trick revealed by my research..

Note: The coils were not visibly red; that is, I could not distinguish the hot burner from the cool burner by sight alone at the time of the exposure.


3: One of the related articles I researched gave me the idea of making a DIY visible bands filter out of a slice of unprocessed camera film. So I tried it. I found a nearly-depleted roll of hockey tape to serve as the filter housing. Then I “sharpied” the insides of the tape’s cardboard tubing and used electrical tape to attach a choice cut of unprocessed camera film. As you can see, the end result fit the camera very nicely; I can’t exaggerate how perfectly the depleted roll of hockey tape fit the camera exterior. Would it work?

Regarding the cut of film, you don’t want to use the see-through, sepia-toned portion with off-color images, as those are processed negative frames. Instead, you want the black, unprocessed portion. I found a good piece with plenty of slack at the butt-end of some old negatives.


4: With my visible bands filter attached and an enthused state-of-being, I returned to the stove for another experiment. Once again I fired up the rear burner, waited for it to radiate a reasonable amount of heat, then took another pic. As you can see, the hot burner is about the only thing visible in the photo.

Our stove is white, so clearly my visible bands filter leaks some light into the exposure. However, note how dark/black the forward burner is. Once again, there was no difference between these burners at the time of exposure (at least, as a human can see). This was the sort of result I was hoping to get.


Blurry Images: In my Nikon Coolpix 3100, it seems the stock NIR filter pulls double duty, behaving not only as a spectral sieve, but also providing some optical focus. James Wooten’s blog post provides some insight into this issue, noting that without the NIR filter/glass..

..the camera will be “near-sighted” without it. I used a optical glass that has about 85% transmission in the IR and visible range. It helps if you have a friend that cuts glass for a living.

I happen to have a specific goal in mind for my NIR camera, so I’m not immediately concerned about this shortcoming; however, if I want to take photos with a reasonable amount of visual acuity, I’ll have to resolve this issue at some point. When and if I get around to that, I’ll return to this post and update my notes.

Related Pages: Each of the following sites provided some form of inspiration or insight, so I wanted to log them along with the article:

http://4photos.de/camera-diy/index-en.html: Many, many links devoted to camera repair, modification, and hackery..

http://www.lifepixel.com/tutorials/infrared-diy-tutorials/nikon-coolpix-5400: These people will do the camera hack for you if you’re willing to part with some money (my Nikon would’ve cost about $200). However, I’m wondering if LifePixel may sell the clear optical lens I need to fix the blurred-images problem? Theirs is the first site I’ll check when I decide to cross that bridge. This URL goes to their tutorial for hacking a similar Coolpix model, but they have several such tutorials, so if you’re considering your own project, I recommend perusing their site for a tutorial describing the hack on a camera model close to the one in your crosshairs.

http://www.geektechnique.org/projectlab/254/how-to-turn-a-digital-camera-into-an-ir-camera.html: Mark Hoekstra does a nice job of presenting his camera hack using several step-by-step images. Read his site in full before you jump head-first into your own project. In the end, I felt my project required going to a similar “component depth”.

http://www.abe.msstate.edu/~jwooten/camera/lense.html: The original James Wooten post, cited in the body of my article.

http://www.echeng.com/photo/infrared/wooten/IR_Block.GIF: Eric Cheng maintains an interesting site; it was actually through his page that I ran across the Wooten article. Eric did the research and posted some details related to the much-needed replacement lens, which I appreciated.

http://www.diyphotography.net/take_infrared_pictures_with_digital_camera_ir_filter: This post inspired my implementation of the DIY visible bands filter. They do a swell job of documenting the materials and steps involved in making the filter (this is if you don’t happen to have a used roll of hockey tape languishing somewhere in your gear bag).

http://www.jollinger.com/photo/articles/cheap_ir.html: One of the first posts I ran across; however, James’ camera was quite different from mine, so most of the post wasn’t applicable. But, it was open in my tabs when I embarked on this venture, so I’ll cite it anyway.

Home Energy Audit: So, why would I make an NIR camera if my chief interest isn’t taking unique and stunning photographs? Well, I hope I can photograph my house from all sides to learn where the structure leaks the most heat. On the other hand, NIR isn’t exactly heat, so this may not correlate in practice, but it’s worth a shot, if only for the fun of trying. My wife and I have been thinking about replacing some of our windows, so if we can’t afford to replace them all, perhaps we can zero in on the windows that are costing us the most money. I don’t know if this will work, but once I think it’s cold enough to try it, I’ll take some pics of the house in all its raging un-eco glory and post them here in the article.



Written by elrobis

November 5th, 2011 at 2:57 pm

Posted in Uncategorized

Install GDAL on Windows

with 57 comments

Later posts on this blog will assume a working install of GDAL/OGR, so it’s prudent that I first demonstrate how to get a fresh cut of GDAL/OGR up-and-running on a Windows system.

Windows users have a few options for installing GDAL (see this question at gis.stackexchange, which will point you to Christoph Gohlke’s binaries (scroll down to GDAL) and the OSGeo4W Installer, among other approaches). However, I’ll guide you through what works for me, which can be summarized as follows:

1) Install Python
2) Install the GDAL binaries published by Tamas Szekeres
3) Append your environment Path variable
4) Add the GDAL_DATA environment variable
5) Finally, perform a quick test to make sure everything worked.

First, Install Python

(For reference purposes, I used version 2.7 for Win32):

· Start at the Python homepage
· At left, find “Quick Links (2.7.2)” and select “Windows Installer”
· Complete the download and start the installation
· Accept the default options suggested by the installer

Once Python is installed, launch IDLE (the built-in Python IDE) to get specific details about your Python installation:

Follow Windows (Start)→Programs→Python 2.7→IDLE (Python GUI)

This figure emphasizes where to look:

Python Version Details Output in IDLE

In this case, I’m noting the following values:
· “MSC v.1500”
· “on win32”

Your values may be different from mine. For instance, you may have v.1400, or maybe you installed 64-bit Python on a 64-bit system. Whatever you did, you’ll need to be consistent from here on out. Since these values will be important in the next step, you may want to keep the IDLE window open for a few minutes.

Next Install the GDAL Binaries

From the GDAL homepage, you’d navigate to Downloads→Windows, and open Tamas Szekeres’ Windows binaries. When Tamas’ page loads, you’ll need to identify the proper download package for your version of Python, system bit-depth, and VC dependencies.

Note: VC2005, VC2008, VC2010 refer to various dependencies upon C++ redistributable libraries. To get your very own, just Google “c++ redistributable” and pick up the most current release. Alternatively, if you want to play it uber-safe, download and install Visual C++ Express 2010, and it will automagically qualify you for MSVC2010 builds (which equates to the compiler used to build GDAL).

At Tamas’ site, do the following:
· Scroll down to “GDAL and MapServer latest release versions”

· Find a release corresponding to both your installed C++ redistributable library AND your Python install. For instance, I used “release-1500-gdal-1-8-0-mapserver-6-0-0”, which corresponds to MSVC2008 (Win32) in the left column (note the figure).

Finding the right GDAL binaries at gisinternals.com

Selecting a release will take you to a new screen promoting a handful of EXE and MSI files—to use ogr2ogr, you’ll need two of these.

· Download and install gdal-18-xxxx-core.msi (where xxxx is 1310, 1400, 1500, or 1600, which matches your Python setup). I recommend keeping the defaults while installing.

· Next download and install the GDAL binaries for your version of Python (i.e. 2.6, 2.7, 3.1). Your choice should match the bit-depth and version of Python you installed in the first place. In keeping with my running example, I used GDAL-1.8.0.win32-py2.7.msi. Once again, run the installer and accept the defaults.

Note: The GDAL installer should detect the Python instance installed in the previous section (see Figure 1-9). If not, first review the steps in the previous section to see if you omitted something (this may indicate the GDAL binaries do not match your installation of Python), otherwise select “Python from another location” and proceed cautiously.

The GDAL Python bindings installer should detect your Python installation.

 Set the Environment Variables

We’re nearly finished installing GDAL. In this section, we’ll configure the Windows environment variables (Path and GDAL_DATA) so the GDAL/OGR utilities will be easily accessible from the command line.

Append the Path Variable

First, append the GDAL installation directory onto the existing Path variable. Do the following to edit the Path variable:

· Open the Control Panel (Start→Control Panel), open System and select “Advanced System Settings”.

Note: Alternatively, you can open the Windows (Start) menu, right-click “Computer”, select Properties, then select “Advanced System Settings”, which I find easier.

· Select “Environment Variables” (near the bottom).

· Find “Path” among the list options and select it.

· With “Path” selected, click the “Edit” button.

· When the Edit System Variable dialog appears, scroll to the end of the series, add a semi-colon (;) to separate the prior path from the new one you’re about to enter, then add your GDAL installation path. I added exactly this:  ;C:\Program Files (x86)\GDAL

· Keep the Environment Variables dialog open for the next task.

Warning: Be sure to confirm your GDAL installation directory. I installed GDAL on Windows XP, but if you run a newer version of Windows, your installation path may be different from mine.

Appending the GDAL installation directory to the Path variable. (THIS IMAGE IS FROM A WIN XP INSTALL! A WIN 7 install will likely require "Program Files (x86)".)
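
As an aside, if you’re scripting against GDAL and can’t (or don’t want to) edit the system-wide Path, you can append the directory for the current process only. This Python sketch is my own illustration, not part of the installer; the directory shown is just the example path from above, so confirm it on your system:

```python
import os

# The Win 7-style default install path from the article; confirm yours.
gdal_dir = r"C:\Program Files (x86)\GDAL"

def append_to_path(environ, directory):
    """Append directory to environ['PATH'] unless it's already listed."""
    current = environ.get("PATH", "")
    parts = current.split(os.pathsep) if current else []
    if directory not in parts:
        parts.append(directory)
    environ["PATH"] = os.pathsep.join(parts)
    return environ["PATH"]

# Affects only this process and any child processes it launches:
append_to_path(os.environ, gdal_dir)
```

This doesn’t replace the permanent Path edit described above; it’s just handy when you want a script to find the GDAL executables without touching system settings.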

Add the GDAL_DATA Variable

Next we’ll add a new System Variable, GDAL_DATA, which helps the various GDAL utilities find the gdal-data folder. Without access to the files in the gdal-data folder, GDAL may have trouble recognizing some projections and coordinate systems, among other things.

In Windows, create the GDAL_DATA variable as follows:

· In the Environment Variables dialog, click “New” (near the bottom).

· In the New Variable dialog, enter GDAL_DATA for the name value.

· For value, enter the path to your gdal-data folder; it should be one level below your GDAL directory, which you can confirm by scouting the GDAL folder. I used:  C:\Program Files (x86)\GDAL\gdal-data

Once again, confirm the gdal-data directory makes sense relative to your GDAL installation folder.

· At this point, commit your changes by selecting “OK” on each dialog.

Adding the GDAL_DATA variable. (Note! On Win 7 you'll probably need to use "Program Files (x86)". I stole this image from my older WinXP tutorial.)

[Update 1 Oct, 2013]

Add the GDAL_DRIVER_PATH Variable

In the same way you added the GDAL_DATA variable, add a GDAL_DRIVER_PATH variable. This variable is necessary for the Python environment to locate various GDAL driver plugins. Like the gdal-data directory, the gdalplugins directory should be one level below your GDAL installation. Of course, confirm the path on your system. (I’m on Win 7 at this point; note this is inconsistent with the screenshot above.)

On my system, the gdalplugins directory is: C:\Program Files (x86)\GDAL\gdalplugins

For those who already have GDAL installed along with the Python bindings: if you’re trying to write custom scripts and you’re getting errors like ERROR 4: ..does not exist ..is not recognized as a supported dataset name, you may simply need to add this additional environment variable. This is especially true if you followed my tutorial for setting up GDAL on your system in the first place! In case of the latter, my apologies.
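
If you’d rather sanity-check the variables from a script than eyeball the dialogs, here’s a small helper (my own sketch, not an official GDAL utility) that reports which of the variables discussed above are missing from the environment:

```python
import os

# Environment variables the GDAL utilities rely on, per the steps above.
REQUIRED_VARS = ("GDAL_DATA", "GDAL_DRIVER_PATH")

def missing_gdal_vars(environ=os.environ):
    """Return the names of expected GDAL variables absent from environ."""
    return [name for name in REQUIRED_VARS if name not in environ]

# Report anything that still needs to be set:
for name in missing_gdal_vars():
    print("Not set:", name)
```

An empty result doesn’t prove the paths are *correct*, only that the variables exist, so still confirm the directories point where you think they do.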


Confirming a Successful GDAL Installation

To confirm the availability of ogr2ogr, just open a new command terminal and attempt to call ogr2ogr. If your installation is successful, ogr2ogr will start running, fail to find any valid runtime arguments, and close, leaving some helpful usage hints in its wake.

To test ogr2ogr in a Windows Command Prompt, do the following:

· Open the Windows (Start) menu, and at the bottom (“Search programs and files”), type cmd and hit Enter; this will open the Command Prompt.

· With the terminal open, type ogr2ogr and hit Enter (see the image).

Because we added the GDAL installation directory to the Path variable, Windows will attempt to find a valid ogr2ogr executable in the GDAL folder. If all goes well your command prompt should resemble the following figure (the ogr2ogr command is highlighted).

Confirming a successful install by calling ogr2ogr.


[Update 27 Jan, 2014]

Change in testing/confirming a successful install

My initial recommendation to just type “ogr2ogr”, which resulted in an error (which I warned you’d get), was confusing for several people. In hindsight, I agree it was a poor choice on my part. So I like this better:

At the command prompt, type this:

gdalinfo --version

And the OGR module should respond with something like this:

C:\Users\elrobis>gdalinfo --version
GDAL 1.9.2, released 2012/10/08

Assuming you get a version number and a release date, and not an error, you should have a clean install of GDAL/OGR to work with. Be warned, though: getting the version output doesn’t confirm that your environment is set up properly. If that stuff isn’t right, you’ll likely get errors when you actually apply GDAL or OGR to some task.
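
If you’re scripting this check, the version line is easy to parse. Here’s a hedged sketch that assumes output shaped like the sample above (in practice you’d capture the line with Python’s subprocess module by running gdalinfo --version; the minimum-version tuple is just an example):

```python
import re

def parse_gdal_version(version_line):
    """Pull (major, minor, patch) from a line like
    'GDAL 1.9.2, released 2012/10/08'. Returns None on no match."""
    match = re.match(r"GDAL (\d+)\.(\d+)\.(\d+)", version_line)
    if not match:
        return None
    return tuple(int(part) for part in match.groups())

sample = "GDAL 1.9.2, released 2012/10/08"
version = parse_gdal_version(sample)
print(version)  # (1, 9, 2)

# Tuples compare element-wise, so a minimum-version check is one line:
assert version >= (1, 8, 0)
```

A None result means the command produced something unexpected, which usually points back at the environment setup steps above.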


And voila! Assuming your test produced the expected output, you should be good to go with a fresh GDAL/OGR installation.

Cheers, Elijah

[Edit 06.22.2012]

If you plan to create your own Python utility scripts with the OGR Python bindings, I further recommend the following Python libraries:

shapely, numpy, Python Imaging Library (PIL), PostgreSQL Adapter (psycopg), MySQL Adapter (mysql-python), Python Shapefile Library (pyshp), Dbase/Xbase Adapter Library (pydbf), matplotlib

Written by elrobis

October 17th, 2011 at 9:33 pm

Posted in GDAL/OGR, Python


Google Maps for Flash/Flex: Dead Man Walking

without comments

“When life gives you lemons..”

Let’s just review the signs:

  • First, the Google Maps Flash/Flex API had not been updated for quite some time.
  • Meanwhile, some developers were sharing a hack to bypass a nasty initialization bottleneck in the Maps Flash/Flex API that stalled map loading in AIR apps.
  • At the other end of the yard, Google lifted its registration key requirement for the Maps JavaScript API V3; yet, the Flash/Flex counterpart remained unchanged in this respect (curiously, it was key-hashing that caused AIR apps to stall).
  • Finally, and though unrelated, Pamela Fox, one of the higher-profile support engineers attached to Maps developer relations, left Google for other pursuits.

In spite of the above indicators, I started an eBook titled: Google Mapping the Shapefile with Flex, PHP, and PostGIS. After four or five chapters, I threw in the towel to enjoy what was left of the summer ..but, maybe that was a good thing, because lo and behold, the Maps Flash/Flex API was a dead man walking. In their defense, Google saw comparatively less consumption of the Flash/Flex API vs the JavaScript flagship, so they pulled the plug on the Flash stuff. It’s an understandable reaction ..but I do wonder if that inertia will eventually affect StreetView, which (I believe) remains in Flash.

I had planned to return to my book over the winter, and I was optimistic about re-approaching O’Reilly with the finished book and a suggestion that it be released as a print title. So clearly I was disappointed by the deprecation announcement. Also, I really like Flex! If I want to make an interactive web map, I’d rather do it with Flex! Where does this leave me?

“..make lemonade.”

That was lemons. But it turns out there’s a chance to make lemonade. It’s called OpenScales, and it’s a Flex mapping API with some real potential. While investigating OpenScales, I had some difficulty finding examples of raw implementations, where the Flex client UI has a direct line (PHP) to vector GIS data stored in a PostGIS-enabled PostgreSQL database. Fortunately it wasn’t difficult to set up. So, rising from the ashes of my failed eBook, and primed by the excitement of finding OpenScales, I decided to start this blog as a place to post OpenScales examples. And that’s exactly what I’ll do.

In the next few months, I’ll post several OpenScales examples here, with the goal of touching upon the following topics:

  • Publish Shapefile or Personal Geodatabase data to an OpenScales map by converting it into usable flatfiles or migrating it into a spatially-aware database and pulling it from there.
  • Load geodata into OpenScales from the following formats: XML, JSON, GML, and KML.
  • Load data faster by enabling AMF support with AMFPHP, et cetera.
  • Pull WMS and WFS data from 3rd-party sources (I’m imagining a weather map).
  • Prepare custom basemap tiles for an OpenScales map and either load raster tiles direct from the filesystem or pull them from a PostGIS database.
  • Publish OpenScales-based applications to Android devices.

If I can do all that in a handful of months, I’ll be a happy camper, and with a little luck, a few people will stumble across this site and they will be, too.

Wild! Stallions!

Written by elrobis

October 16th, 2011 at 3:26 pm

Posted in Uncategorized
