Archive for the ‘PostgreSQL’ Category
CentOS 7 and PostgreSQL: Move the PostgreSQL data_directory
Long time no see. Scroll down for the answer, or start here for the story..
Today, by happenstance while testing a backup/restore strategy, my coworker discovered that our CentOS 7 installation from ISO/DVD used a default disk partitioning strategy that allocated a majority of disk space to the /home mountpoint and a dearth of space to /, the root mountpoint. In our case, this meant about 2.1 TB available to the home mount and approximately 50 GB to the root mount.
Meanwhile, our default PostgreSQL 9.x install on this platform established the database instance at a location on the root mountpoint, which naturally limited it to 50 GB of consumption. While 50 GB might be enough for our purposes where the database instance is concerned, there is some danger in that arrangement: if the instance eventually consumed all 50 GB, it could potentially hose the system. Believe it or not, it's possible for log files to burgeon to sizes like this, particularly on systems that get stood up for simple missions and run for a decade trouble-free, their log files all the while collecting information about every db query, every web request, every user login, etc.
This system is not yet in production, so one option available to us was to start entirely from scratch and customize the disk partitioning so more space would be available to the database, and who knows what else. We considered resizing the partitions, stealing some space from the rich home mount and giving it to the less affluent root mount, but that option looked like a mess we didn't want to step in. So ultimately we decided to try moving the database instance to the preferred mountpoint. Having reached that decision, and being generally sure of what needed to happen for it to work, we still didn't find a definitive start-to-finish "how to" that addressed our confusions or a few of the gotchas we ran into. I'll refrain from discussing what we expected and why, or a few missteps we made in the process, because in the end that's just brain pollution.
While I suppose this approach works for most CentOS 7.x and PostgreSQL 9.x combos, since versions, repositories, and package managers are things, your mileage may vary.
No warranty implied!
If you follow these instructions to the letter, you should be able to undo everything by working backwards, which is why we kept the original data directory intact, only renamed. But tread very carefully if you’re considering moving a production database.
<TL;DR>
CentOS 7 and PostgreSQL: How to move the PostgreSQL data_directory:
1. To get started, you need your current data_directory value. If you don't know it, you can get it from your database client (pgAdmin, DBeaver, etc.) by connecting as the postgres user and querying:
SHOW data_directory;
Alternatively, you can get it from psql; here's how that looks in my case:
cd /home
su postgres
psql
%TYPE_YOUR_PASSWORD_AT_THE_PROMPT%
show data_directory;
## Returns: /var/lib/pgsql/9.5/data
It probably goes without saying, but everywhere you see 9.5, your service names and file paths may differ a little, so adjust accordingly.
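If you're not sure what your local install calls things, a couple of quick checks should surface the specifics. This is my addition to the original notes, and the example outputs in the comments are just what we saw on our box:
systemctl list-unit-files | grep -i postgres    ## Reveals the service name, e.g. postgresql-9.5.service
ls /var/lib/pgsql/                              ## Reveals the versioned data path, e.g. 9.5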
2. Next, create a new home for the PostgreSQL instance. You'll want to stop the database first to release any locks. We created a location at /home/data/pg_data, like this:
service postgresql-9.5 stop            ## Stop the database.
cd /home
## Make your new containing directory structure..
mkdir data
cd data
mkdir pg_data
chown -R postgres:postgres pg_data     ## Establish postgres as the owner of the new directory.
3. Now deep-copy the contents of your original data directory to the new location, then rename your original data directory so that when you restart the instance, you'll know for certain it's working out of the new location. Here's how:
cd /var/lib/pgsql/9.5/data    ## Browse into the original data dir.
su postgres                   ## Switch to the postgres user for permission safety.
cp -a * /home/data/pg_data/   ## Make an archive copy (-a) of the contents, preserving permissions and symlinks.
cd ..                         ## Browse back up..
mv data/ ORIGDATADIR          ## And rename (mv) the original data folder.
4. Now PostgreSQL is moved, but you have to tell systemd, which instantiates the service, where the data now lives. We surmised this from the ServerFault question asked here, and this is how we did it.
Note that nomenclature like [i], [ESC], [:], etc. means literally press those keys to communicate with the vi interface. Hopefully that makes sense. Also, please see the WARNING discussed in the closing thoughts.
cd /usr/lib/systemd/system/                   ## Browse here..
vi postgresql-9.5.service                     ## Open the postgres service file in vi.
[i]                                           ## Toggle INSERT mode in vi by hitting "i".
#Environment=PGDATA=/var/lib/pgsql/9.5/data/  ## Find and comment-out this line with a pound sign (#)..
Environment=PGDATA=/home/data/pg_data/        ## ..and replace it with a line using your new data_directory path.
[ESC]:x!                                      ## Finally, save your edits and close the file.
5. In the final step, you need to impose a change on SELinux as described here (lines 2 and 3), ensure proper access permissions on the new data_directory location (lines 4 and 5), reload the systemd configuration so it picks up your edited service file, and then restart the PostgreSQL instance.
cd /home/data
semanage fcontext --add --equal /var/lib/pgsql /home/data/pg_data
restorecon -rv /home/data/pg_data
chmod -R 700 /home/data/pg_data
chown -R postgres /home/data/pg_data
systemctl daemon-reload
service postgresql-9.5 start
That should do it: the start command should bring your database back online, operating out of the new location. If it fails to start, you might learn something about the problem by running journalctl -xe, which we did a few times while troubleshooting the approach.
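A few more checks worth running at this point. These are my suggestion rather than part of the original recipe, and nothing here is exotic, just standard systemd and psql fare (adjust the unit name to match yours):
systemctl status postgresql-9.5.service           ## Quick view of the service state and recent log lines.
journalctl -u postgresql-9.5 --since today        ## Log entries for just this unit.
su postgres -c "psql -c 'SHOW data_directory;'"   ## Should now report /home/data/pg_data.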
WARNING. As alluded to in Step 4, when you open the postgresql service file, notice the counsel at the top instructing you not to modify the file, but instead to create a different file elsewhere, under /etc, and modify that one, because changes to the systemd/system file will be overwritten in the event of a package upgrade or the like. We couldn't get that to work for some reason, so we made good notes in our system documentation and hope to keep this concern in mind going forward. I encourage you to pursue the recommended approach, but at the very least be mindful of this potential reversal down the road, and of any poor soul who may be following in your administrative footsteps without your knowledge of the problem! At minimum, put a descriptive README file in a few key locations, like the root user's home directory and the database's default installation path.
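For the record, here's a minimal sketch of what I understand the recommended approach to look like. Standard systemd behavior is that a drop-in file under /etc/systemd/system overrides the packaged unit and survives package upgrades, and a later Environment=PGDATA assignment should win over the packaged one. We couldn't get it working on our box, so treat this as untested:
mkdir -p /etc/systemd/system/postgresql-9.5.service.d
cat <<'EOF' > /etc/systemd/system/postgresql-9.5.service.d/override.conf
[Service]
Environment=PGDATA=/home/data/pg_data/
EOF
systemctl daemon-reload            ## Make systemd re-read the unit plus the drop-in.
service postgresql-9.5 restart     ## Restart against the overridden PGDATA.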
Looking back, there are a few changes to this approach that would save a few steps, but this worked for us so I decided to post it as-built. (e.g. Rather than creating pg_data and then cp -a'ing the contents of data, just cp -a the entire data directory, which might avoid the chmod and chown steps later on..) I may decide to go back and modify the postgresql-9.5.service startup to use the development-recommended approach, in which case I'll hopefully update this post, but please don't hesitate to leave a comment if you know how to get that working.
Create UTFGrid Tiles from PostGIS Tables
I assume you’re where I was about a week ago. That is, you’ve heard of UTFGrid, and now you want to render your own UTFGrid tiles. Perhaps you found yourself looking at this thread over at GIS.SE, but you don’t want to jump into TileStache just now. Anyway, if you’ve got a working installation of GDAL/OGR and Mapnik 2+, complete with Python bindings, I’ll show you what worked for me..
Because this is merely an adaptation of Matthew Perry’s original solution, I highly recommend considering his original blog entry on the topic, complete with discussion points and caveats, before proceeding further!
Once you’re ready, head over to GitHub and download some code, specifically globalmaptiles.py, create_tile_shp.py, and create_utfgrids.py. [1] You don’t actually need create_tile_shp.py, but I snagged all three. To keep things simple, put these files in the same folder.
Next, in that same directory, create a new Python file; I called mine createUtfgridsFromPG.py.
Where create_utfgrids.py works entirely on shapefiles, createUtfgridsFromPG.py accepts an OGR PostgreSQL connection string in place of a shapefile path. Fortunately the original OGR code didn’t change, but to use the Mapnik PostGIS driver I had to iterate over the PostgreSQL connection string and store the connection parameters so I could supply them differently to Mapnik.
Finally, copy the following code and paste it into your new file. It’s basically the same as create_utfgrids.py, but I changed shppath to pgconn and added some code to use a mapnik.PostGIS() datasource in place of a mapnik.Shapefile() datasource.
If you’re curious, the comment # ELR 2014.9.26: flags the few places I changed the original code.
createUtfgridsFromPG.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
create_utfgrids.py
Author: Matthew Perry
License: BSD
Creates utfgrid .json tiles for the given polygon shapefile
Thx to Dane Springmeyer for the utfgrid spec and mapnik rendering code
and to Klokan Petr Přidal for his MapTiler code
(http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/)
"""
import globalmaptiles
import mapnik
import ogr
import os
from optparse import OptionParser, OptionError
try:
    import simplejson as json
except ImportError:
    import json

def create_utfgrids(pgconn, minzoom, maxzoom, outdir, fields=None, layernum=0):
    # ELR 2014.9.26:
    # Original implementation pushed in a shapefile path.
    #ds = ogr.Open(shppath)
    ds = ogr.Open(pgconn)

    # ELR 2014.9.26:
    # Iterate over the PostgreSQL connection string and pull out values we need
    # to use Mapnik's PostGIS datasource constructor.
    pgConnARR = pgconn[3:].split(' ')
    for kvPair in pgConnARR:
        if kvPair.split('=')[0] == "host":
            nikHost = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "port":
            nikPort = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "user":
            nikUser = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "password":
            nikPass = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "dbname":
            nikDB = kvPair.split('=')[1]
        if kvPair.split('=')[0] == "tables":
            nikTable = kvPair.split('=')[1]

    print
    print "WARNING:"
    print " This script assumes a polygon shapefile in spherical mercator projection."
    print " If any of these assumptions are not true, don't count on the results!"
    # TODO confirm polygons
    # TODO confirm mercator
    # TODO get layernum from command line
    layer = ds.GetLayer(layernum)
    bbox = layer.GetExtent()
    print ""
    print str(bbox)

    mercator = globalmaptiles.GlobalMercator()
    m = mapnik.Map(256, 256)

    # Since grids are `rendered` they need a style
    s = mapnik.Style()
    r = mapnik.Rule()
    polygon_symbolizer = mapnik.PolygonSymbolizer(mapnik.Color('#f2eff9'))
    r.symbols.append(polygon_symbolizer)
    line_symbolizer = mapnik.LineSymbolizer(mapnik.Color('rgb(50%,50%,50%)'), 0.1)
    r.symbols.append(line_symbolizer)
    s.rules.append(r)
    m.append_style('My Style', s)
    print ""

    # ELR 2014.9.26:
    # Original implementation using shapefile..
    #ds = mapnik.Shapefile(file=shppath)

    # ELR 2014.9.26:
    # Parameterized PostGIS implementation..
    ds = mapnik.PostGIS(host=nikHost, port=nikPort, user=nikUser,
                        password=nikPass, dbname=nikDB, table=nikTable)

    mlayer = mapnik.Layer('poly')
    mlayer.datasource = ds
    mlayer.styles.append('My Style')
    m.layers.append(mlayer)
    print ""

    if fields is None:
        fields = mlayer.datasource.fields()
        print "Fields were NONE. Using.."
        print fields
    else:
        print "Fields are USER PROVIDED. Using.."
        print fields
    print ""

    for tz in range(minzoom, maxzoom+1):
        print " * Processing Zoom Level %s" % tz
        tminx, tminy = mercator.MetersToTile(bbox[0], bbox[2], tz)
        tmaxx, tmaxy = mercator.MetersToTile(bbox[1], bbox[3], tz)
        for ty in range(tminy, tmaxy+1):
            for tx in range(tminx, tmaxx+1):
                output = os.path.join(outdir, str(tz), str(tx))
                if not os.path.exists(output):
                    os.makedirs(output)
                # Use top origin tile scheme (like OSM or GMaps)
                # TODO support option for TMS bottom origin scheme (ie opt to not invert)
                ymax = 1 << tz
                invert_ty = ymax - ty - 1
                tilefilename = os.path.join(output, "%s.json" % invert_ty)  # ty for TMS bottom origin
                tilebounds = mercator.TileBounds(tx, ty, tz)
                #print tilefilename, tilebounds
                box = mapnik.Box2d(*tilebounds)
                m.zoom_to_box(box)
                grid = mapnik.Grid(m.width, m.height)
                mapnik.render_layer(m, grid, layer=0, fields=fields)
                utfgrid = grid.encode('utf', resolution=4)
                with open(tilefilename, 'w') as file:
                    file.write(json.dumps(utfgrid))

if __name__ == "__main__":
    usage = "usage: %prog [options] pgconn minzoom maxzoom output_directory"
    parser = OptionParser(usage)
    parser.add_option("-f", '--fields', dest="fields",
                      help="Comma-separated list of fields; default is all")
    (options, args) = parser.parse_args()
    if len(args) != 4:
        parser.error("Incorrect number of arguments")
    pgconn = args[0]
    minzoom, maxzoom = int(args[1]), int(args[2])
    outdir = args[3]
    if os.path.exists(outdir):
        parser.error("output directory exists already")
    if options.fields:
        fields = options.fields.split(",")
    else:
        fields = None
    create_utfgrids(pgconn, minzoom, maxzoom, outdir, fields)
Usage..
Once you’ve prepared createUtfgridsFromPG.py, you can call it from the command line like this..
C:\xDev\utfgrids\createUtfgridsFromPG.py "PG:host=127.0.0.1 port=5432 user=postgres dbname=gis password=passw0rd tables=parcels_pmerc" 12 16 "C:/xGIS/tiles/utf" -f tms,owner_name
- Hopefully the PostgreSQL connection string ("PG:host=..") makes sense. 12 and 16 represent the minimum and maximum zoom levels to be rendered, respectively.
- The directory "C:/xGIS/tiles/utf" is where your UTFGrid tiles will be saved.
- And -f tms,owner_name represents a comma-separated list of data fields you want in your UTFGrid.
Caveats..
- Both create_utfgrids.py and createUtfgridsFromPG.py require your geodata table to be in a Web Mercator projection (EPSG:3857)!
- The script assumes a top-origin tile scheme, like OSM and others.
- The script will only work with polygons.
- While the OGR PostgreSQL connection string has a tables parameter, this implementation will only accept one table.
- The script will create your target directory (in the example case, utf), and it will throw an error if you create this directory in advance.
[1] Many thanks to Matthew Perry, Klokan Petr Přidal, and Dane Springmeyer for their collective efforts and for sharing their work.
OGR VRT: Connect to PostGIS DataSource
I needed an OGR VRT for something and didn’t find a clear example on the web all in one place, so here goes.
Somewhere on your system, create a new file with a .ovf extension. Inside that file, add some XML like the following to define your PostgreSQL connection:
That name="WKTGrid" is semantically unrelated here. I have been experimenting with including WKT geometry data in UTFGrid tiles, and the name is an artifact of those experiments. You can provide almost any value for name; however, do note that the layer name is referenced in the ogrinfo command below.
<OGRVRTDataSource>
  <OGRVRTLayer name="WKTGrid">
    <SrcDataSource>PG:host=127.0.0.1 user=postgres dbname=gis password=l00per</SrcDataSource>
    <SrcLayer>parcels_cama_20140829_pmerc</SrcLayer>
    <SrcSQL>SELECT tms, owner_name, the_wkt FROM parcels_cama_20140829_pmerc</SrcSQL>
  </OGRVRTLayer>
</OGRVRTDataSource>
OGRVRTLayer: The layer name attribute you assign can be anything you want.
SrcDataSource: The data source value defines your PostgreSQL connection parameters.
SrcLayer: The source layer identifies the target table in your PostgreSQL instance.
SrcSQL: [Optional] OGR SQL can be used to target specific fields, define field aliases, and even refine the data set using WHERE clauses, etc.
After you make a VRT, it’s smart to test it in ogrinfo before you use it for anything serious; if ogrinfo makes sense of it, then you know you’ve got a good VRT.
A command like this uses ogrinfo and OGR_SQL to open the VRT and isolate one feature, showing you its attributes.
ogrinfo C:\xGIS\Vector\parcels\parcels_cama_20140829_pmerc.ovf -sql " SELECT tms, owner_name, the_wkt FROM WKTGrid WHERE tms = 'R39200-02-31' "
In some cases, OGR may have trouble identifying your geometry field, or you may have multiple geometry fields and want to specify one field in particular. If so, note the following changes, specifically the modification to the SrcSQL node and the added GeometryField node.
<OGRVRTDataSource>
  <OGRVRTLayer name="WKTGrid">
    <SrcDataSource>PG:host=127.0.0.1 user=postgres dbname=gis password=l00per</SrcDataSource>
    <SrcLayer>parcels_cama_20140829_pmerc</SrcLayer>
    <SrcSQL>SELECT ST_AsBinary(wkb_geometry) as geomm, tms, owner_name FROM parcels_cama_20140829_pmerc</SrcSQL>
    <GeometryField encoding="WKB" field="geomm"></GeometryField>
  </OGRVRTLayer>
</OGRVRTDataSource>
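If you want a quick confirmation that OGR now sees the geometry you intended, a summary-only ogrinfo call is a handy, low-noise check. This is a sketch against the example VRT above (the -ro flag opens it read-only, -so asks for the layer summary, including the detected geometry type):
ogrinfo -ro -so C:\xGIS\Vector\parcels\parcels_cama_20140829_pmerc.ovf WKTGrid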
And this is just scratching the surface. Make sure to check out the OGR VRT driver page for a complete list of options available to you.
PostgreSQL/PostGIS Implementation of Google’s Encoded Polyline Algorithm
[Edit 30 Jan, 2014]
I added an additional PostgreSQL method to perform Polygon encoding by concatenating polygon geometries (delimiter: †) and their inner rings (delimiter: ‡) into one massive encoded block of ring features. I also provided an example JavaScript method demonstrating how to bring the amalgamated polygon feature encodings into your Google Map.
By some uncanny twist of the fates, I’ve elected to use, had to use, and/or been asked to develop applications that use Google Maps ASCII Encoded Polyline expressions. In previous encounters, I’ve used a PHP class to handle the encoding task, and most recently I wrote a Python method to decode these expressions so that I could return a 3rd-party’s encoded geometries to WKT and import them into a spatially aware database.
So far so good.
However, one thing has always bugged me about using the PHP solution: I don’t like using a piece of middleware to handle what I consider to be a responsibility of the data layer. Mark McClure’s page, which is basically the seminal authority on this topic, provides external links to implementations for Perl, Ruby, PHP (note: I prefer the PHP class linked, above), Java, and Mathematica. Also, by searching Stack Overflow, you can find implementations of the algorithm in both C# and VB.Net. But for all my efforts searching, I could never dredge up an implementation for either MySQL or PostgreSQL/PostGIS. Bummer.
Looking ahead, it seems version 2.2 of PostGIS might include some built-in Google encoding conversion methods. While that’s cool enough for a hat tip, it’s unfortunately too inconvenient to wait that long, and even then, there’s no guarantee the implementation will work the way I expect with complex Polygon geometries; for instance, maybe it will encode only the exterior ring of Polygons, or ignore MultiPolygons completely, etc. For that matter, it’s equally possible there could be some bugs. With all that said, and even though the previously mentioned PHP implementation does the job, my boss was cool enough to let me take a crack at implementing the algorithm as a PostgreSQL/PostGIS function and share the results with the world. Since some initial testing confirms my PostGIS implementation works, I’ll just post the code parts and hope others find it useful.
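As a hedge for future readers: if you land here after PostGIS 2.2 ships, my understanding is its built-in ST_AsEncodedPolyline() should cover the single LineString case out of the box, something like the sketch below, which reuses the polygons_wgs84 table from my examples. Verify the function and its behavior against your version’s docs before trusting it:
psql -U postgres -d gis -c "SELECT ST_AsEncodedPolyline(ST_ExteriorRing(wkb_geometry)) FROM polygons_wgs84 LIMIT 1;"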
For what it’s worth, if anyone finds a bug or has recommendations for improvements, please don’t hesitate to drop me a line.
Sample query calling the first encoding function on the EXTERIOR RING of Polygon geometries:
(Also works on single-part LINESTRING features.)
/************************************************************************
 * Note that the encoding method can accept a LINESTRING only, which
 * is the geometry type used to represent the ring parts of a Polygon.
 * To help understand this, and why, please see the trailing discussion
 * section, which elaborates on this situation further.
 ************************************************************************/
SELECT GoogleEncodeLine(ST_ExteriorRing(wkb_geometry)) as Google
FROM polygons_wgs84
WHERE ST_GeometryType(wkb_geometry) = 'ST_Polygon'
LIMIT 10;
[Added 30 Jan, 2014]
Sample query calling the second encoding function on Polygon and MultiPolygon geometries:
(Preserves multi-part polygons and their inner-ring parts, a.k.a. “holes”.)
/************************************************************************
 * This encoding method will accept Polygon and MultiPolygon geom types.
 * The output returned is an amalgamation of Polyline encodings, where
 * individual geometries and their interior rings are concatenated
 * together using string delimiters, †, and ‡, respectively.
 ************************************************************************/
SELECT GoogleEncodePolygon(wkb_geometry) as GooglePolygon
FROM polygons_wgs84
LIMIT 10;
Implementation functions to execute/save in your PostgreSQL instance:
[Added 30 Jan, 2014]
/*************************************************************
 * Pass in either a Polygon or MultiPolygon geometry. Returns
 * an array of ASCII-encoded Polygon feature parts, including
 * multi-part geometries and their interior rings.
 ************************************************************/
CREATE OR REPLACE FUNCTION GoogleEncodePolygon
(
  g1 GEOMETRY
)
RETURNS TEXT AS $$
DECLARE
  ng INT;         -- Store number of Geometries in the Polygon.
  g INT;          -- Counter for the current geometry number during outer loop.
  g2 GEOMETRY;    -- Current geometry feature isolated by the outer loop.
  nr INT;         -- Store number of internal ring parts in the Polygon.
  r INT;          -- Counter for the current inner-ring part.
  r1 GEOMETRY;    -- Exterior ring part isolated BEFORE the inner loop.
  r2 GEOMETRY;    -- Inner-ring part isolated within the inner loop.
  gEncoded TEXT;  -- Completed Google Encoding.
BEGIN
  gEncoded = '';
  ng = ST_NumGeometries(g1);
  g = 1;
  FOR g IN 1..ng BY 1 LOOP
    g2 = ST_GeometryN(g1, g);
    IF g > 1 THEN
      gEncoded = gEncoded || chr(8224);
    END IF;
    -- Get ExteriorRing now; if there are any holes, get them later in the loop..
    r1 = ST_ExteriorRing(g2);
    gEncoded = gEncoded || GoogleEncodeLine(r1);
    nr = ST_NRings(g2);
    IF nr > 1 THEN
      -- One (1) is because interior rings is one-based.
      -- And nr-1 is because ring count includes the boundary.
      FOR r IN 1..(nr-1) BY 1 LOOP
        r2 = ST_InteriorRingN(g2, r);
        gEncoded = gEncoded || chr(8225) || GoogleEncodeLine(r2);
      END LOOP;
    END IF;
  END LOOP;
  RETURN gEncoded;
END
$$ LANGUAGE plpgsql;
/*************************************************************
 * First of two methods. Pass in a geometry (LINESTRING only).
 * Returns ASCII-encoded point array for use in Google Maps.
 ************************************************************/
CREATE OR REPLACE FUNCTION GoogleEncodeLine
(
  g GEOMETRY
)
RETURNS TEXT AS $$
DECLARE
  pt1 GEOMETRY;
  pt2 GEOMETRY;
  p INT;
  np INT;
  deltaX INT;
  deltaY INT;
  enX VARCHAR(255);
  enY VARCHAR(255);
  gEncoded TEXT;
BEGIN
  gEncoded = '';
  np = ST_NPoints(g);

  IF np > 3 THEN
    g = ST_SimplifyPreserveTopology(g, 0.00001);
    np = ST_NPoints(g);
  END IF;

  pt1 = ST_SetSRID(ST_MakePoint(0, 0), 4326);

  FOR p IN 1..np BY 1 LOOP
    pt2 = ST_PointN(g, p);
    deltaX = (floor(ST_X(pt2)*1e5) - floor(ST_X(pt1)*1e5))::INT;
    deltaY = (floor(ST_Y(pt2)*1e5) - floor(ST_Y(pt1)*1e5))::INT;
    enX = GoogleEncodeSignedInteger(deltaX);
    enY = GoogleEncodeSignedInteger(deltaY);
    gEncoded = gEncoded || enY || enX;
    pt1 = ST_SetSRID(ST_MakePoint(ST_X(pt2), ST_Y(pt2)), 4326);
  END LOOP;
  RETURN gEncoded;
END
$$ LANGUAGE plpgsql;
/**************************************************************
 * Second of two methods. Accepts a signed integer (LON or LAT
 * by 1e5) and returns an ASCII-encoded coordinate expression.
 *************************************************************/
CREATE OR REPLACE FUNCTION GoogleEncodeSignedInteger(c INT)
RETURNS VARCHAR(255) AS $$
DECLARE
  e VARCHAR(255);
  s BIT(32);
  b BIT(6);
  n INT;
BEGIN
  e = '';
  s = (c::BIT(32)) << 1;

  IF s::INT < 0 THEN
    s = ~s;
  END IF;

  WHILE s::INT >= B'100000'::INT LOOP
    b = B'100000' | (('0' || substring(s, 28, 5))::BIT(6));
    n = b::INT + 63;
    e = e || chr(n);
    s = s >> 5;
  END LOOP;
  e = e || chr(s::INT + 63);
  RETURN e;
END
$$ LANGUAGE plpgsql;
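To "execute/save" these functions as mentioned above, you can paste them into your SQL client of choice, or load them from a file with psql. A sketch of the latter, where the file name is hypothetical and the connection parameters match my earlier examples:
psql -U postgres -d gis -f google_polyline_functions.sql   ## Runs each CREATE OR REPLACE FUNCTION statement in the file.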
[Added 30 Jan, 2014]
JavaScript method demonstrating how to add Polygon encodings to a Google Map object:
(This client implementation works for either single or multi-part polygons.)
/*************************************************************
 * JavaScript! Pass in an encoded text block created by either
 * PostGIS method, GoogleEncodePolygon() or GoogleEncodeLine(),
 * and render it in your Google Map object. If you don't want
 * the map to zoom to each rendering, just remove the "bounds"
 * variable and any references to it.
 ************************************************************/
function renderEncoded(encoded_path)
{
  var bounds = new google.maps.LatLngBounds();
  var $encodedGeoms = encoded_path.split("†");
  for (var i = 0; i < $encodedGeoms.length; i++)
  {
    var encodedGeom = $encodedGeoms[i];
    var $encodedRings = encodedGeom.split("‡");
    var polyPaths = [];
    for (var j = 0; j < $encodedRings.length; j++)
    {
      var ptarray = google.maps.geometry.encoding.decodePath($encodedRings[j]);
      polyPaths.push(ptarray);
    }
    var polygonObject = new google.maps.Polygon(
    {
      paths: polyPaths,
      strokeColor: '#890000',
      strokeOpacity: 1.0,
      strokeWeight: 2
    });
    polygonObject.setMap(map);
    polygonObject.getPath().forEach(function(e)
    {
      bounds.extend(e);
    });
  }
  map.fitBounds(bounds);
}
And some additional discussion..
There are two “gotchas” when it comes to implementing the encoding algorithm with respect to Polygons:
1) Polygons, as geometries, can be composed of many rings. The outer ring is considered to be the boundary, and the various inner rings are often called “holes”. So there is a specified, understood, and accepted many-to-one relationship built in between polygons and their internal ring geometries.
And 2) It’s not rare to find polygon tables containing both Polygon and MultiPolygon data types. I think this happens because ESRI allows it, and so in an effort to play well with others, other GIS systems have accommodated it. At least, I know this is true for MySQL and PostGIS.
Here’s why this makes trouble: Google’s encoding algorithm is only intended to represent an individual point array as a singular geometry. Basically, as long as your first point equals your last point, it’s considered to be a closed geometry, and you can add it and render it in a Google Map as a polygon. The algorithm itself isn’t designed to represent nested arrays, which would be necessary to render either a Polygon with “holes” or a MultiPolygon, which could potentially define many polygons with holes of their own! As such, I suspect there could be considerable disagreement as to how a Polygon-to-Google-Encoded method should actually handle Polygons..
The only solutions I can imagine for this problem would require “faking” a one-to-many relationship, perhaps by delimiting together several encodings to account for MultiPolygons and/or single-feature Polygons with interior rings. But this starts to get weird. So to keep things somewhat simple for the sake of the post, I chose to stay true to the algorithm’s intent and return a single encoded geometry expression from my method. And the sample query demonstrates this by calling the method against the outermost ring (i.e. the boundary) of a Polygon geometry type, which PostgreSQL regards as a LineString, anyway.
[Added 30 Jan, 2014]
Since I wanted to handle the more complex geometries, I wrote the wrapper method GoogleEncodePolygon(), first to iterate over ST_NumGeometries() and gain access to any multi-part features, and second to iterate over ST_NRings() using ST_InteriorRingN() (you could also do this using ST_DumpRings()) and gain access to any interior rings of the Polygon geometries themselves. Then, for each ring part, I call GoogleEncodeLine() and concatenate all those expressions into one massive block of “compound” expressions. I chose to delimit each geometry encoding using an extra-special character that would never be used by Google’s algorithm; for example, chr(8224), which corresponds to “†”. I then further delimit the internal ring parts using another special character, chr(8225), which corresponds to “‡”, and return all these concatenated together as a compound encoding expression. Then, on the client side (a JavaScript example is provided above), I merely split the compound expression against my delimiters, loop over the various expressions, and add them to the map individually. Note that if you are attaching attributes to your features, you’ll need to remember to apply them explicitly to each unique Polygon added to your map.