Posts Tagged ‘ postgresql ’

Upgrading Postgres & PostGIS on Windows

Major upgrades for both PostGIS and PostgreSQL were released in the last few weeks.  Postgres moved from 9.5 to 9.6 and PostGIS from 2.2 to 2.3.  With both programs moving up a major version number you have to take an extra step to make sure your upgrade goes smoothly.  On Windows PostGIS is an easy install; it comes bundled with the Postgres installer in StackBuilder.  This is very handy, except in cases like this where both programs are jumping to new major versions.  You’ll run into a snag if you try to use pg_upgrade to migrate your database from the old server to the new one: pg_upgrade will look for the old PostGIS libraries in your new Postgres directory, not find them, and error out.  To get around this you’ll need to upgrade PostGIS first.

To start, download the new version of PostGIS for your current Postgres install from here: http://download.osgeo.org/postgis/windows.

Once it is installed, upgrade PostGIS in your spatial databases by running:

ALTER EXTENSION postgis UPDATE TO '2.3.0';

Change 2.3.0 to the version of PostGIS you are upgrading to. Once you’ve upgraded PostGIS in your old spatial databases, run the Postgres installer for the new version. The installer will set up the new version in parallel with your existing install(s); by default it will run the new version on port 5433 instead of 5432.
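Before moving on, you can confirm the extension update took by connecting to each spatial database on the old server and running:

SELECT postgis_full_version();

It should report the new PostGIS version.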

After the new version is installed make sure both the old and new server services are stopped, then in each server’s data folder open the pg_hba.conf file and change the connection method to ‘trust’ (sample entries are shown after the pg_upgrade command below).  Next open a command prompt, navigate to a directory you have write permission in, and run:

"c:\Program Files\PostgreSQL\9.6\bin\pg_upgrade.exe" -b "c:\Program Files\PostgreSQL\9.5\bin" -B "c:\Program Files\PostgreSQL\9.6\bin" -d "c:\Program Files\PostgreSQL\9.5\data" -D "c:\Program Files\PostgreSQL\9.6\data" -p 5432 -P 5433 -U postgres

Change the version numbers and database superuser account as needed.
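For reference, the pg_hba.conf change is just a matter of setting the METHOD column of each entry to trust; on a default Windows install the edited host entries look something like this (your addresses may differ):

# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust

Once pg_upgrade finishes you can switch the entries back to your preferred authentication method.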


Installing PostgreSQL 9.5 and PostGIS 2.2 on Fedora 24

Here’s the easiest way to get a very up-to-date installation of PostGIS up and running quickly.  First up, set up the Postgres 9.5 repository straight from Postgres:

sudo rpm -ivh https://download.postgresql.org/pub/repos/yum/9.5/fedora/fedora-24-x86_64/pgdg-fedora95-9.5-3.noarch.rpm

Next up, install Postgres:

sudo dnf install postgresql95 postgresql95-server postgresql95-libs postgresql95-contrib postgresql95-devel

The -devel package is optional, but I will be building Ruby’s PG gem later so it needs to be there for me. If you have no such plans, feel free to skip it.  If you want a GUI to help manage the database, add the ‘pgadmin3’ package to the list. Now it is time to initialize the database. Do so with:

sudo /usr/pgsql-9.5/bin/postgresql95-setup initdb

That will take a minute to run as it sets up the initial database. Once the database is initialized start Postgres with systemd by running:

sudo systemctl start postgresql-9.5

If you want Postgres to start at boot, run the same command with ‘enable’ in place of ‘start’ (sudo systemctl enable postgresql-9.5) before you reboot or shut down. Next up is a little housekeeping. Set a password for the postgres account.  I like to do this in the database itself rather than on the system’s postgres user account.  To do so run:

sudo -u postgres psql postgres

This starts a psql session as the postgres user; once in psql you can set the password with:

\password postgres

Next up, since this is just a development box, I’ll change Postgres’ login method from ident to md5.

sudo gvim /var/lib/pgsql/9.5/data/pg_hba.conf

Change all ‘peer’ and ‘ident’ entries to ‘md5’. If this is a production server you’ll want a more secure and robust login method. However, if you don’t want to bother with passwords at all, replace them with ‘trust’ instead of ‘md5’. Next up is installing PostGIS. It and nearly all of its dependencies are nicely packaged in the Postgres repository, making it easy to keep the two in sync. Install PostGIS with:

sudo dnf install postgis2_95 postgis2_95-client

As a test you can create a database called ‘test’ and enable PostGIS with:

sudo -u postgres createdb test
psql -d test -U postgres -W -c "CREATE SCHEMA postgis; CREATE EXTENSION postgis WITH SCHEMA postgis; ALTER DATABASE test SET search_path TO public, postgis;"
psql -d test -U postgres -W -c "SELECT postgis_full_version();"

If the last command returns information about the version of PostGIS installed you’re good to go. Have fun!

A nice explainer for PostgreSQL’s EXPLAIN output

This site does a great job of analyzing the output of PostgreSQL EXPLAIN queries.  It will show you which steps of a query are slowing you down, giving you the chance to rewrite your query, change indexing, or reorganize tables in order to speed things up.  It’s very helpful!
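If you haven’t generated one of these plans before, the text the site analyzes is simply what Postgres prints when you prefix a query with EXPLAIN, or EXPLAIN ANALYZE to actually execute it. For example, using the ride table from the posts below:

EXPLAIN ANALYZE
SELECT ride_date, avg(speed) AS avg_speed
FROM ride
GROUP BY ride_date;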

Postgis 2.0.2 hits Arch repository

Arch has posted the PostGIS 2.0.2 release that came out December 3rd to the Community repository. After Pacman updates the package, Postgres needs to be made aware of the upgrade.  Use:

ALTER EXTENSION postgis UPDATE TO '2.0.2';
ALTER EXTENSION postgis_topology UPDATE TO '2.0.2';

Chernoff Faces of Bike Rides

Chernoff faces from 4 bike rides

I plugged some of the .fit file data that I stored in a Postgres database into R to generate the image above.  Two R libraries were used: RPostgreSQL, to connect to Postgres, and aplpack, to generate the faces.  Loading the libraries and opening the connection looks roughly like the sketch below (the database name and credentials are placeholders):
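library(RPostgreSQL)
library(aplpack)

# Open a connection to the database holding the ride data; dbname and
# user here are just placeholders for illustration.
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname = "rides", user = "postgres")

With the connection open, the following R command pulls the data out of Postgres: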

rs <- dbSendQuery(con, "
SELECT ride_date
, ST_Length(ST_Transform(ST_Makeline(the_geom ORDER BY ride_time),26916)) As length
, date_part('hour', max(ride_time))-date_part('hour', min(ride_time)) As ridetime
, avg(temperature) As temp, avg(speed) As avg_speed
, avg(altitude) As alt, max(altitude)-min(altitude) As alt_diff
, avg(cadence) As rpm 
FROM ride GROUP BY ride_date ORDER BY ride_date
")

Then fetch the data from the record set with:

rides<-fetch(rs, n=-1)

And finally plot the faces with:

faces(rides[,2:8], face.type=0, labels=rides$ride_date)

The faces function draws the faces based on the order of the variables.  The features are modified in this order:

height of face 
width of face 
structure of face 
height of mouth 
width of mouth 
smiling 
height of eyes 
width of eyes 
height of hair 
width of hair 
style of hair 
height of nose 
width of nose 
width of ear 
height of ear

If you don’t have enough variables to fill this list it will wrap around and start again from the top.  For more configuration options run ?faces after you load aplpack.

There are two small problems with the data, and they are both caused by stopping the timer on your bike computer, then restarting it at a later time and place.  The time ridden is calculated here by simply subtracting the minimum time from the maximum.  If you rode for 3 hours and took an hour break the query would return a time of 4 hours.  Similarly, if you ride for a mile, hop in a truck and drive 10, then ride for another mile, you’d get a result of 12 miles.  There’s a way to work around this; I just haven’t figured it out yet.  I suppose the best solution is to harden up and not take breaks.

The data is stored as points in the database, so to get the length ST_MakeLine “connects the dots” from point to point.  An awesome feature of Postgres 9.0+ is the ORDER BY in the middle of an aggregate function.  It helped in this case because the first ride graphed doubled up and crossed over itself several times.  This led to the query planner making some interesting decisions on where the line should go.  Forcing the process to follow the points in order made the line follow my route perfectly.
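As a minimal illustration of the difference (the date literal is just a placeholder):

-- Points connected in whatever order the rows happen to arrive:
SELECT ST_MakeLine(the_geom) FROM ride WHERE ride_date = '2012-06-01';

-- Points connected in the order they were recorded:
SELECT ST_MakeLine(the_geom ORDER BY ride_time) FROM ride WHERE ride_date = '2012-06-01';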

Ranking the Indiana Breweries by Population

In my last post I used PostGIS to calculate the number of Hoosiers that live within 5 miles of an Indiana microbrewery.  Now I want to rank each brewery by population.  The total is going to be a little more than the 2.5 million-ish I came up with in the last post due to the lucky duckies who live within 5 miles of more than one brewery.

Flat 12 of Indianapolis, with over 243,000 people within 5 miles of their location, is our winner.  No big surprise, but the top 11, and 12 of the top 13, are all Indy breweries.

A few breweries, most notably Three Floyds in Munster, are lower than they should be because I only used Indiana census data.  Three Floyds is only a half mile from the Indiana / Illinois border.  If I had included Illinois data the nearby Chicago population would have pushed them up the list.

The full list and the SQL used to create it are below the jump.
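As a rough idea of the shape of that query, here is a sketch with hypothetical table names (brewery and census_block, both stored in a meter-based projection such as EPSG:26916), not the exact SQL behind the jump:

SELECT b.name, sum(c.pop) AS pop_within_5_miles
FROM brewery b
JOIN census_block c ON ST_DWithin(b.geom, c.geom, 5 * 1609.34)
GROUP BY b.name
ORDER BY pop_within_5_miles DESC;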


Views, PostGIS 2, and QGIS

If you wanted to create a view with spatial capabilities in previous versions of PostGIS, all you needed to do was manually register the view in the geometry_columns table.  This is no longer possible since, from PostGIS 2.0 forward, geometry_columns is a view, not a table.  However, if you use a typemod to define the geometry column in your view, the geometry_columns view will pick it up.  For example, I wanted to take some of the data I loaded from my .fit files, convert the metric data to imperial units, and reproject the data from WGS84 to UTM.  Here’s the view that does that for me:

CREATE OR REPLACE VIEW rides.ride_utm AS
SELECT id, no
, utility.units_from_to('meter', 'feet', altitude) AS edge_elev
, 32+(temperature*(1.8)) AS temp_f, speed*2.2369 AS mph, cadence
, ride_date, ride_time, elev_dem
, ST_Transform(the_geom, 26916)::geometry(Point, 26916) As the_geom FROM rides.ride;

The units_from_to function is something I picked up from PostGIS in Action.  The geometry column uses ST_Transform to reproject the data, and the typemod after defines the column as having point geometry with my UTM projection.

With the geometry column defined in that manner the geometry_columns view is properly populated.  When you want to use this view in QGIS it is visible as a loadable layer.  However, before you can use it you must tell QGIS which field is the unique row identifier.  To the left of the layer’s SRID number on the PostGIS layer load screen there is a drop-down box that lists the possible fields to use as your ID column.  Just select the column that contains unique values and you’re good to go.
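If you want to confirm the registration before firing up QGIS, you can query geometry_columns directly; for the view above, something like this should return one row showing Point geometry and SRID 26916:

SELECT f_table_schema, f_table_name, f_geometry_column, srid, type
FROM geometry_columns
WHERE f_table_schema = 'rides' AND f_table_name = 'ride_utm';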